Test Report: Docker_Linux_crio_arm64 21801

3dc60e2e5dc0007721440fd051e7cba5635b79e7:2025-10-27:42091

Failed tests (36/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.34
35 TestAddons/parallel/Registry 15.17
36 TestAddons/parallel/RegistryCreds 0.49
37 TestAddons/parallel/Ingress 144.45
38 TestAddons/parallel/InspektorGadget 5.33
39 TestAddons/parallel/MetricsServer 5.39
41 TestAddons/parallel/CSI 43.83
42 TestAddons/parallel/Headlamp 3.06
43 TestAddons/parallel/CloudSpanner 6.31
44 TestAddons/parallel/LocalPath 8.47
45 TestAddons/parallel/NvidiaDevicePlugin 5.27
46 TestAddons/parallel/Yakd 6.27
97 TestFunctional/parallel/ServiceCmdConnect 603.47
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.09
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.29
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.27
128 TestFunctional/parallel/ServiceCmd/DeployApp 600.79
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.31
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.22
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.38
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
147 TestFunctional/parallel/ServiceCmd/Format 0.39
148 TestFunctional/parallel/ServiceCmd/URL 0.39
191 TestJSONOutput/pause/Command 2.48
197 TestJSONOutput/unpause/Command 1.88
281 TestPause/serial/Pause 6.49
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.44
305 TestStartStop/group/old-k8s-version/serial/Pause 8.07
309 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.53
314 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.48
321 TestStartStop/group/no-preload/serial/Pause 8
327 TestStartStop/group/embed-certs/serial/Pause 7.24
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.42
333 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 3.28
343 TestStartStop/group/newest-cni/serial/Pause 8.45
348 TestStartStop/group/default-k8s-diff-port/serial/Pause 8.03
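Durations are in seconds. The two ~600 s rows (ServiceCmdConnect, ServiceCmd/DeployApp) look like wait-loop timeouts rather than immediate errors; most of the short addon-test rows trace back to one shared `runc list` failure, sketched after the Volcano log below.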
TestAddons/serial/Volcano (0.34s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-101592 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-101592 addons disable volcano --alsologtostderr -v=1: exit status 11 (338.683708ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1027 18:59:44.266163  274612 out.go:360] Setting OutFile to fd 1 ...
	I1027 18:59:44.267050  274612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:44.267089  274612 out.go:374] Setting ErrFile to fd 2...
	I1027 18:59:44.267113  274612 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:44.267426  274612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 18:59:44.267798  274612 mustload.go:65] Loading cluster: addons-101592
	I1027 18:59:44.268274  274612 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:44.268313  274612 addons.go:606] checking whether the cluster is paused
	I1027 18:59:44.268453  274612 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:44.268485  274612 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:59:44.269018  274612 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:59:44.305479  274612 ssh_runner.go:195] Run: systemctl --version
	I1027 18:59:44.305536  274612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:59:44.330068  274612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:59:44.450456  274612 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 18:59:44.450622  274612 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 18:59:44.487245  274612 cri.go:89] found id: "3e163786b302c708b17fe3ccf72adc340a75e221cd0896244768e1510b5aa88f"
	I1027 18:59:44.487279  274612 cri.go:89] found id: "838eb4978f205b327ad70c9fe726ab47413794195e774fac18ab9ae207723d6a"
	I1027 18:59:44.487287  274612 cri.go:89] found id: "d6943420da6a4a5c627e3e1859fcefbdc0f116ec559255515e3c76875ef303ec"
	I1027 18:59:44.487291  274612 cri.go:89] found id: "7e2b5aeafd0057d1e7d6c15f07b26a4833c3369e948af52f6ce2a927043ec7ce"
	I1027 18:59:44.487294  274612 cri.go:89] found id: "2aa2da6d8f06b0fc78c255affa19eec7d586298271ac54c0800166ccd1369669"
	I1027 18:59:44.487298  274612 cri.go:89] found id: "50e0b6b85b8a72a9fc162999003b05f4316b2f1d308c55874ee95912e0e1662e"
	I1027 18:59:44.487320  274612 cri.go:89] found id: "cd4e827788ce9c8dcbacf66655ce3bbee1fa66b94eb5c687b755fb2b6f5a5cc1"
	I1027 18:59:44.487339  274612 cri.go:89] found id: "7f055e0b328c77feb86cbf8ba19ed9f73334ca7b97c3bc8004db97b398fbfd75"
	I1027 18:59:44.487343  274612 cri.go:89] found id: "7e472e3b179a0264586ada3ba6103e35c58134a6e668cb3b2b6bf2e04e56940a"
	I1027 18:59:44.487350  274612 cri.go:89] found id: "a9d1dbc41feea92d008c9764ac8993cf482b1be7c976bf1221012c77bb4740da"
	I1027 18:59:44.487358  274612 cri.go:89] found id: "bfa31a6efbb78b65e0d3b92e9683ba4e99525bf68f9335010d133d41e467bee4"
	I1027 18:59:44.487362  274612 cri.go:89] found id: "458122abc2a23963c258beece13449337f47c2ccdc075406236a8a13a8063201"
	I1027 18:59:44.487365  274612 cri.go:89] found id: "2d95c80ef718cfc6804388ffc97159e55cf48ac7c6673231ecb8239d50e7d811"
	I1027 18:59:44.487368  274612 cri.go:89] found id: "a33d881a2daa0476651bd4a08fc46730205cfe03c3681178bb615a5e9635926e"
	I1027 18:59:44.487371  274612 cri.go:89] found id: "8ebcbe2c9975f2a8719a3839dcd24a15c292a81c5c76b171625fac6a82dd851d"
	I1027 18:59:44.487376  274612 cri.go:89] found id: "fe0ea9b1d2cf2cbefbf8c8c1c5b40f8d072efd2d489b534d519c969d9f5078d6"
	I1027 18:59:44.487405  274612 cri.go:89] found id: "f81e4711fbc01c871625a0d7da229dd580feb4c12934b3a38b50966b57a4259b"
	I1027 18:59:44.487411  274612 cri.go:89] found id: "28587c37519da029d28ad619eccddb0f9cf6be1a53a5a3ca6648894a187e90d2"
	I1027 18:59:44.487421  274612 cri.go:89] found id: "1e5034229198563b54bd8e5f173e0492eee9301465992e61e3c894cb36e53dd6"
	I1027 18:59:44.487424  274612 cri.go:89] found id: "d5efbcd6024e7af2a933432a33e2b4197973d93eab2d9c966cb6e488d871f23b"
	I1027 18:59:44.487430  274612 cri.go:89] found id: "4a399d9b30f5e68ec2fe6ac2e4b287b9821a6d963956dc63355919c90bbdbff6"
	I1027 18:59:44.487434  274612 cri.go:89] found id: "752e65ab367c93dbf18c03c14f825896a94d2b17a2a40668c50b5961b7e50972"
	I1027 18:59:44.487437  274612 cri.go:89] found id: "02d35f7174bb35a983c70121d66fa26a8b0d33acfa279db1bc1b3d623b4f8f09"
	I1027 18:59:44.487440  274612 cri.go:89] found id: ""
	I1027 18:59:44.487505  274612 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 18:59:44.506928  274612 out.go:203] 
	W1027 18:59:44.509949  274612 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T18:59:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 18:59:44.509983  274612 out.go:285] * 
	W1027 18:59:44.518117  274612 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 18:59:44.523170  274612 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-101592 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.34s)
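Every MK_ADDON_DISABLE_PAUSED failure in this run follows the pattern above: before disabling an addon, minikube checks whether the cluster is paused (addons.go:606), and that check ends by running `sudo runc list -f json` on the node. Here runc's default state directory /run/runc does not exist (crio appears to keep container state elsewhere), so the command exits 1 and the disable aborts with exit status 11. Below is a minimal Go sketch of the failing step, not minikube's actual code; `listPausedContainers` is a hypothetical helper:

```go
// Sketch of the paused-check seen in the log above. Assumes a node where
// sudo and runc are on PATH.
package main

import (
	"fmt"
	"os/exec"
)

// listPausedContainers mirrors the failing step `sudo runc list -f json`.
// On this node the default runc root /run/runc is absent, so the call
// returns "open /run/runc: no such file or directory" and a non-zero exit.
func listPausedContainers() ([]byte, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		return nil, fmt.Errorf("list paused: runc: %w\n%s", err, out)
	}
	return out, nil
}

func main() {
	if _, err := listPausedContainers(); err != nil {
		// This branch is what surfaces as MK_ADDON_DISABLE_PAUSED in the tests.
		fmt.Println("check paused failed:", err)
	}
}
```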

TestAddons/parallel/Registry (15.17s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 7.550681ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-jvgtv" [590c8e4c-843c-4795-a036-d9969aaa1662] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004236018s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-k87sb" [459056d8-2856-482a-8f3a-86430bc52e0e] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003683261s
addons_test.go:392: (dbg) Run:  kubectl --context addons-101592 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-101592 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-101592 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.668574923s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-101592 ip
2025/10/27 19:00:10 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-101592 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-101592 addons disable registry --alsologtostderr -v=1: exit status 11 (257.402326ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1027 19:00:10.816005  275573 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:00:10.816819  275573 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:00:10.816863  275573 out.go:374] Setting ErrFile to fd 2...
	I1027 19:00:10.816888  275573 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:00:10.817175  275573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 19:00:10.817505  275573 mustload.go:65] Loading cluster: addons-101592
	I1027 19:00:10.817942  275573 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:00:10.817989  275573 addons.go:606] checking whether the cluster is paused
	I1027 19:00:10.818118  275573 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:00:10.818153  275573 host.go:66] Checking if "addons-101592" exists ...
	I1027 19:00:10.818649  275573 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 19:00:10.836806  275573 ssh_runner.go:195] Run: systemctl --version
	I1027 19:00:10.836865  275573 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 19:00:10.854344  275573 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 19:00:10.961513  275573 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:00:10.961608  275573 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:00:10.992743  275573 cri.go:89] found id: "3e163786b302c708b17fe3ccf72adc340a75e221cd0896244768e1510b5aa88f"
	I1027 19:00:10.992813  275573 cri.go:89] found id: "838eb4978f205b327ad70c9fe726ab47413794195e774fac18ab9ae207723d6a"
	I1027 19:00:10.992825  275573 cri.go:89] found id: "d6943420da6a4a5c627e3e1859fcefbdc0f116ec559255515e3c76875ef303ec"
	I1027 19:00:10.992830  275573 cri.go:89] found id: "7e2b5aeafd0057d1e7d6c15f07b26a4833c3369e948af52f6ce2a927043ec7ce"
	I1027 19:00:10.992834  275573 cri.go:89] found id: "2aa2da6d8f06b0fc78c255affa19eec7d586298271ac54c0800166ccd1369669"
	I1027 19:00:10.992838  275573 cri.go:89] found id: "50e0b6b85b8a72a9fc162999003b05f4316b2f1d308c55874ee95912e0e1662e"
	I1027 19:00:10.992848  275573 cri.go:89] found id: "cd4e827788ce9c8dcbacf66655ce3bbee1fa66b94eb5c687b755fb2b6f5a5cc1"
	I1027 19:00:10.992852  275573 cri.go:89] found id: "7f055e0b328c77feb86cbf8ba19ed9f73334ca7b97c3bc8004db97b398fbfd75"
	I1027 19:00:10.992855  275573 cri.go:89] found id: "7e472e3b179a0264586ada3ba6103e35c58134a6e668cb3b2b6bf2e04e56940a"
	I1027 19:00:10.992862  275573 cri.go:89] found id: "a9d1dbc41feea92d008c9764ac8993cf482b1be7c976bf1221012c77bb4740da"
	I1027 19:00:10.992869  275573 cri.go:89] found id: "bfa31a6efbb78b65e0d3b92e9683ba4e99525bf68f9335010d133d41e467bee4"
	I1027 19:00:10.992873  275573 cri.go:89] found id: "458122abc2a23963c258beece13449337f47c2ccdc075406236a8a13a8063201"
	I1027 19:00:10.992876  275573 cri.go:89] found id: "2d95c80ef718cfc6804388ffc97159e55cf48ac7c6673231ecb8239d50e7d811"
	I1027 19:00:10.992879  275573 cri.go:89] found id: "a33d881a2daa0476651bd4a08fc46730205cfe03c3681178bb615a5e9635926e"
	I1027 19:00:10.992883  275573 cri.go:89] found id: "8ebcbe2c9975f2a8719a3839dcd24a15c292a81c5c76b171625fac6a82dd851d"
	I1027 19:00:10.992888  275573 cri.go:89] found id: "fe0ea9b1d2cf2cbefbf8c8c1c5b40f8d072efd2d489b534d519c969d9f5078d6"
	I1027 19:00:10.992894  275573 cri.go:89] found id: "f81e4711fbc01c871625a0d7da229dd580feb4c12934b3a38b50966b57a4259b"
	I1027 19:00:10.992898  275573 cri.go:89] found id: "28587c37519da029d28ad619eccddb0f9cf6be1a53a5a3ca6648894a187e90d2"
	I1027 19:00:10.992901  275573 cri.go:89] found id: "1e5034229198563b54bd8e5f173e0492eee9301465992e61e3c894cb36e53dd6"
	I1027 19:00:10.992904  275573 cri.go:89] found id: "d5efbcd6024e7af2a933432a33e2b4197973d93eab2d9c966cb6e488d871f23b"
	I1027 19:00:10.992909  275573 cri.go:89] found id: "4a399d9b30f5e68ec2fe6ac2e4b287b9821a6d963956dc63355919c90bbdbff6"
	I1027 19:00:10.992920  275573 cri.go:89] found id: "752e65ab367c93dbf18c03c14f825896a94d2b17a2a40668c50b5961b7e50972"
	I1027 19:00:10.992923  275573 cri.go:89] found id: "02d35f7174bb35a983c70121d66fa26a8b0d33acfa279db1bc1b3d623b4f8f09"
	I1027 19:00:10.992925  275573 cri.go:89] found id: ""
	I1027 19:00:10.993002  275573 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:00:11.005934  275573 out.go:203] 
	W1027 19:00:11.007228  275573 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:00:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 19:00:11.007245  275573 out.go:285] * 
	W1027 19:00:11.013132  275573 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 19:00:11.016054  275573 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-101592 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.17s)
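The disable step here failed exactly like Volcano above: the registry pods came up, the in-cluster wget succeeded, and only the trailing `addons disable registry` call hit the `runc list` error. The RegistryCreds, MetricsServer, CSI, and the other short addon failures below appear to end the same way; see the sketch after the Volcano log.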

TestAddons/parallel/RegistryCreds (0.49s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.968607ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-101592
addons_test.go:332: (dbg) Run:  kubectl --context addons-101592 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-101592 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-101592 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (257.610196ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1027 19:00:43.032423  276673 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:00:43.033258  276673 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:00:43.033276  276673 out.go:374] Setting ErrFile to fd 2...
	I1027 19:00:43.033282  276673 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:00:43.033693  276673 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 19:00:43.034119  276673 mustload.go:65] Loading cluster: addons-101592
	I1027 19:00:43.034576  276673 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:00:43.034598  276673 addons.go:606] checking whether the cluster is paused
	I1027 19:00:43.034742  276673 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:00:43.034760  276673 host.go:66] Checking if "addons-101592" exists ...
	I1027 19:00:43.035375  276673 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 19:00:43.052763  276673 ssh_runner.go:195] Run: systemctl --version
	I1027 19:00:43.052824  276673 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 19:00:43.070843  276673 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 19:00:43.173931  276673 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:00:43.174025  276673 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:00:43.205772  276673 cri.go:89] found id: "3e163786b302c708b17fe3ccf72adc340a75e221cd0896244768e1510b5aa88f"
	I1027 19:00:43.205791  276673 cri.go:89] found id: "838eb4978f205b327ad70c9fe726ab47413794195e774fac18ab9ae207723d6a"
	I1027 19:00:43.205796  276673 cri.go:89] found id: "d6943420da6a4a5c627e3e1859fcefbdc0f116ec559255515e3c76875ef303ec"
	I1027 19:00:43.205816  276673 cri.go:89] found id: "7e2b5aeafd0057d1e7d6c15f07b26a4833c3369e948af52f6ce2a927043ec7ce"
	I1027 19:00:43.205820  276673 cri.go:89] found id: "2aa2da6d8f06b0fc78c255affa19eec7d586298271ac54c0800166ccd1369669"
	I1027 19:00:43.205824  276673 cri.go:89] found id: "50e0b6b85b8a72a9fc162999003b05f4316b2f1d308c55874ee95912e0e1662e"
	I1027 19:00:43.205829  276673 cri.go:89] found id: "cd4e827788ce9c8dcbacf66655ce3bbee1fa66b94eb5c687b755fb2b6f5a5cc1"
	I1027 19:00:43.205832  276673 cri.go:89] found id: "7f055e0b328c77feb86cbf8ba19ed9f73334ca7b97c3bc8004db97b398fbfd75"
	I1027 19:00:43.205836  276673 cri.go:89] found id: "7e472e3b179a0264586ada3ba6103e35c58134a6e668cb3b2b6bf2e04e56940a"
	I1027 19:00:43.205841  276673 cri.go:89] found id: "a9d1dbc41feea92d008c9764ac8993cf482b1be7c976bf1221012c77bb4740da"
	I1027 19:00:43.205844  276673 cri.go:89] found id: "bfa31a6efbb78b65e0d3b92e9683ba4e99525bf68f9335010d133d41e467bee4"
	I1027 19:00:43.205847  276673 cri.go:89] found id: "458122abc2a23963c258beece13449337f47c2ccdc075406236a8a13a8063201"
	I1027 19:00:43.205851  276673 cri.go:89] found id: "2d95c80ef718cfc6804388ffc97159e55cf48ac7c6673231ecb8239d50e7d811"
	I1027 19:00:43.205854  276673 cri.go:89] found id: "a33d881a2daa0476651bd4a08fc46730205cfe03c3681178bb615a5e9635926e"
	I1027 19:00:43.205857  276673 cri.go:89] found id: "8ebcbe2c9975f2a8719a3839dcd24a15c292a81c5c76b171625fac6a82dd851d"
	I1027 19:00:43.205861  276673 cri.go:89] found id: "fe0ea9b1d2cf2cbefbf8c8c1c5b40f8d072efd2d489b534d519c969d9f5078d6"
	I1027 19:00:43.205864  276673 cri.go:89] found id: "f81e4711fbc01c871625a0d7da229dd580feb4c12934b3a38b50966b57a4259b"
	I1027 19:00:43.205868  276673 cri.go:89] found id: "28587c37519da029d28ad619eccddb0f9cf6be1a53a5a3ca6648894a187e90d2"
	I1027 19:00:43.205871  276673 cri.go:89] found id: "1e5034229198563b54bd8e5f173e0492eee9301465992e61e3c894cb36e53dd6"
	I1027 19:00:43.205874  276673 cri.go:89] found id: "d5efbcd6024e7af2a933432a33e2b4197973d93eab2d9c966cb6e488d871f23b"
	I1027 19:00:43.205878  276673 cri.go:89] found id: "4a399d9b30f5e68ec2fe6ac2e4b287b9821a6d963956dc63355919c90bbdbff6"
	I1027 19:00:43.205881  276673 cri.go:89] found id: "752e65ab367c93dbf18c03c14f825896a94d2b17a2a40668c50b5961b7e50972"
	I1027 19:00:43.205884  276673 cri.go:89] found id: "02d35f7174bb35a983c70121d66fa26a8b0d33acfa279db1bc1b3d623b4f8f09"
	I1027 19:00:43.205887  276673 cri.go:89] found id: ""
	I1027 19:00:43.205938  276673 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:00:43.220451  276673 out.go:203] 
	W1027 19:00:43.223280  276673 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:00:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 19:00:43.223304  276673 out.go:285] * 
	W1027 19:00:43.229385  276673 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 19:00:43.232419  276673 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-101592 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.49s)

TestAddons/parallel/Ingress (144.45s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-101592 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-101592 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-101592 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [837ffee0-989d-47f8-8845-e8d80d20af65] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [837ffee0-989d-47f8-8845-e8d80d20af65] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003571559s
I1027 19:00:31.427788  267880 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-101592 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-101592 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.622401277s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-101592 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-101592 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
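The curl above hung for the full 2m9s and ssh reported status 28, which is curl's "operation timed out" exit code: the request never completed. Below is a hedged stand-in for the probe in Go, runnable inside the node (the 30-second budget is a choice here, not the test's):

```go
// Stand-in for `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'`
// as run by the test inside the minikube node.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	// The Host header selects the nginx Ingress rule under test.
	req.Host = "nginx.example.com"

	// Bound the wait so a hang surfaces in seconds, not minutes.
	client := &http.Client{Timeout: 30 * time.Second}
	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("ingress probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("ingress answered:", resp.Status)
}
```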
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-101592
helpers_test.go:243: (dbg) docker inspect addons-101592:

-- stdout --
	[
	    {
	        "Id": "6440f0423a17a97cc1b07b80749bdf1a62b64d235db78febedbc667a9b5028d6",
	        "Created": "2025-10-27T18:57:16.770574053Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 269038,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T18:57:16.832542873Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/6440f0423a17a97cc1b07b80749bdf1a62b64d235db78febedbc667a9b5028d6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6440f0423a17a97cc1b07b80749bdf1a62b64d235db78febedbc667a9b5028d6/hostname",
	        "HostsPath": "/var/lib/docker/containers/6440f0423a17a97cc1b07b80749bdf1a62b64d235db78febedbc667a9b5028d6/hosts",
	        "LogPath": "/var/lib/docker/containers/6440f0423a17a97cc1b07b80749bdf1a62b64d235db78febedbc667a9b5028d6/6440f0423a17a97cc1b07b80749bdf1a62b64d235db78febedbc667a9b5028d6-json.log",
	        "Name": "/addons-101592",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-101592:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-101592",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6440f0423a17a97cc1b07b80749bdf1a62b64d235db78febedbc667a9b5028d6",
	                "LowerDir": "/var/lib/docker/overlay2/3bed65a7e0bb8d93131cdb29b4732b3022c6d31a7f852e0b3b4ecc20b11d66ac-init/diff:/var/lib/docker/overlay2/0567218bbe05e8b59b2aef7ad82032d6716040c1f9d9d73e7de8682101079f76/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3bed65a7e0bb8d93131cdb29b4732b3022c6d31a7f852e0b3b4ecc20b11d66ac/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3bed65a7e0bb8d93131cdb29b4732b3022c6d31a7f852e0b3b4ecc20b11d66ac/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3bed65a7e0bb8d93131cdb29b4732b3022c6d31a7f852e0b3b4ecc20b11d66ac/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-101592",
	                "Source": "/var/lib/docker/volumes/addons-101592/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-101592",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-101592",
	                "name.minikube.sigs.k8s.io": "addons-101592",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5dd906d8fbdda779de355066361d1aff27470acef9ca178b571101e47212b552",
	            "SandboxKey": "/var/run/docker/netns/5dd906d8fbdd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-101592": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:be:94:d1:5f:cc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "322e74e7b4664bad1b9706c3bcec00f024011c8e602d4eba745a9fe7ed7c8852",
	                    "EndpointID": "341d9976d109d95f2f607893bc7d1435407d10e62d754b1dcf765939c04ace01",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-101592",
	                        "6440f0423a17"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-101592 -n addons-101592
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-101592 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-101592 logs -n 25: (1.853561657s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-980377                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-980377 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ start   │ --download-only -p binary-mirror-324835 --alsologtostderr --binary-mirror http://127.0.0.1:42369 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-324835   │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	│ delete  │ -p binary-mirror-324835                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-324835   │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ addons  │ disable dashboard -p addons-101592                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-101592          │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	│ addons  │ enable dashboard -p addons-101592                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-101592          │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	│ start   │ -p addons-101592 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-101592          │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ addons-101592 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-101592          │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │                     │
	│ addons  │ addons-101592 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-101592          │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │                     │
	│ addons  │ enable headlamp -p addons-101592 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-101592          │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │                     │
	│ addons  │ addons-101592 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-101592          │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │                     │
	│ ip      │ addons-101592 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-101592          │ jenkins │ v1.37.0 │ 27 Oct 25 19:00 UTC │ 27 Oct 25 19:00 UTC │
	│ addons  │ addons-101592 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-101592          │ jenkins │ v1.37.0 │ 27 Oct 25 19:00 UTC │                     │
	│ addons  │ addons-101592 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-101592          │ jenkins │ v1.37.0 │ 27 Oct 25 19:00 UTC │                     │
	│ addons  │ addons-101592 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-101592          │ jenkins │ v1.37.0 │ 27 Oct 25 19:00 UTC │                     │
	│ ssh     │ addons-101592 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-101592          │ jenkins │ v1.37.0 │ 27 Oct 25 19:00 UTC │                     │
	│ addons  │ addons-101592 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-101592          │ jenkins │ v1.37.0 │ 27 Oct 25 19:00 UTC │                     │
	│ addons  │ addons-101592 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-101592          │ jenkins │ v1.37.0 │ 27 Oct 25 19:00 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-101592                                                                                                                                                                                                                                                                                                                                                                                           │ addons-101592          │ jenkins │ v1.37.0 │ 27 Oct 25 19:00 UTC │ 27 Oct 25 19:00 UTC │
	│ addons  │ addons-101592 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-101592          │ jenkins │ v1.37.0 │ 27 Oct 25 19:00 UTC │                     │
	│ addons  │ addons-101592 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-101592          │ jenkins │ v1.37.0 │ 27 Oct 25 19:00 UTC │                     │
	│ addons  │ addons-101592 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-101592          │ jenkins │ v1.37.0 │ 27 Oct 25 19:00 UTC │                     │
	│ ssh     │ addons-101592 ssh cat /opt/local-path-provisioner/pvc-c9f40c89-0f13-48bb-bf71-f70c4746ee6e_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-101592          │ jenkins │ v1.37.0 │ 27 Oct 25 19:01 UTC │ 27 Oct 25 19:01 UTC │
	│ addons  │ addons-101592 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-101592          │ jenkins │ v1.37.0 │ 27 Oct 25 19:01 UTC │                     │
	│ addons  │ addons-101592 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-101592          │ jenkins │ v1.37.0 │ 27 Oct 25 19:01 UTC │                     │
	│ ip      │ addons-101592 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-101592          │ jenkins │ v1.37.0 │ 27 Oct 25 19:02 UTC │ 27 Oct 25 19:02 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 18:56:51
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 18:56:51.344310  268639 out.go:360] Setting OutFile to fd 1 ...
	I1027 18:56:51.344912  268639 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:56:51.344930  268639 out.go:374] Setting ErrFile to fd 2...
	I1027 18:56:51.344937  268639 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:56:51.345430  268639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 18:56:51.345908  268639 out.go:368] Setting JSON to false
	I1027 18:56:51.346688  268639 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5964,"bootTime":1761585448,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1027 18:56:51.346756  268639 start.go:141] virtualization:  
	I1027 18:56:51.349903  268639 out.go:179] * [addons-101592] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 18:56:51.353540  268639 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 18:56:51.353628  268639 notify.go:220] Checking for updates...
	I1027 18:56:51.359121  268639 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 18:56:51.361952  268639 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 18:56:51.364783  268639 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube
	I1027 18:56:51.367616  268639 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 18:56:51.370388  268639 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 18:56:51.373392  268639 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 18:56:51.402878  268639 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 18:56:51.403029  268639 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 18:56:51.463882  268639 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-27 18:56:51.455127171 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 18:56:51.463987  268639 docker.go:318] overlay module found
	I1027 18:56:51.467217  268639 out.go:179] * Using the docker driver based on user configuration
	I1027 18:56:51.470144  268639 start.go:305] selected driver: docker
	I1027 18:56:51.470170  268639 start.go:925] validating driver "docker" against <nil>
	I1027 18:56:51.470185  268639 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 18:56:51.470890  268639 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 18:56:51.523621  268639 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-27 18:56:51.514664242 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 18:56:51.523786  268639 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1027 18:56:51.524020  268639 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 18:56:51.526853  268639 out.go:179] * Using Docker driver with root privileges
	I1027 18:56:51.529709  268639 cni.go:84] Creating CNI manager for ""
	I1027 18:56:51.529781  268639 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 18:56:51.529795  268639 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 18:56:51.529884  268639 start.go:349] cluster config:
	{Name:addons-101592 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-101592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 18:56:51.533012  268639 out.go:179] * Starting "addons-101592" primary control-plane node in "addons-101592" cluster
	I1027 18:56:51.535875  268639 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 18:56:51.538791  268639 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 18:56:51.541546  268639 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 18:56:51.541603  268639 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 18:56:51.541616  268639 cache.go:58] Caching tarball of preloaded images
	I1027 18:56:51.541640  268639 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 18:56:51.541701  268639 preload.go:233] Found /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 18:56:51.541711  268639 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 18:56:51.542094  268639 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/config.json ...
	I1027 18:56:51.542125  268639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/config.json: {Name:mk045c40dedbb543bd714b134e668126fe1c7694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:51.556460  268639 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1027 18:56:51.556625  268639 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1027 18:56:51.556644  268639 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1027 18:56:51.556649  268639 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1027 18:56:51.556656  268639 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1027 18:56:51.556662  268639 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1027 18:57:09.337139  268639 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1027 18:57:09.337179  268639 cache.go:232] Successfully downloaded all kic artifacts
	I1027 18:57:09.337209  268639 start.go:360] acquireMachinesLock for addons-101592: {Name:mk6d8d9111d5dfe86e292b53fd2763254776e2b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 18:57:09.337333  268639 start.go:364] duration metric: took 103.908µs to acquireMachinesLock for "addons-101592"
	I1027 18:57:09.337365  268639 start.go:93] Provisioning new machine with config: &{Name:addons-101592 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-101592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 18:57:09.337438  268639 start.go:125] createHost starting for "" (driver="docker")
	I1027 18:57:09.340936  268639 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1027 18:57:09.341202  268639 start.go:159] libmachine.API.Create for "addons-101592" (driver="docker")
	I1027 18:57:09.341253  268639 client.go:168] LocalClient.Create starting
	I1027 18:57:09.341386  268639 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem
	I1027 18:57:09.570318  268639 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem
	I1027 18:57:09.981254  268639 cli_runner.go:164] Run: docker network inspect addons-101592 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 18:57:09.996548  268639 cli_runner.go:211] docker network inspect addons-101592 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 18:57:09.996642  268639 network_create.go:284] running [docker network inspect addons-101592] to gather additional debugging logs...
	I1027 18:57:09.996663  268639 cli_runner.go:164] Run: docker network inspect addons-101592
	W1027 18:57:10.012494  268639 cli_runner.go:211] docker network inspect addons-101592 returned with exit code 1
	I1027 18:57:10.012525  268639 network_create.go:287] error running [docker network inspect addons-101592]: docker network inspect addons-101592: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-101592 not found
	I1027 18:57:10.012540  268639 network_create.go:289] output of [docker network inspect addons-101592]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-101592 not found
	
	** /stderr **
	I1027 18:57:10.012641  268639 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 18:57:10.036040  268639 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001918aa0}
	I1027 18:57:10.036085  268639 network_create.go:124] attempt to create docker network addons-101592 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1027 18:57:10.036165  268639 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-101592 addons-101592
	I1027 18:57:10.095667  268639 network_create.go:108] docker network addons-101592 192.168.49.0/24 created
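	The "docker network create" invocation above is logged verbatim, so this step can be reproduced or checked by hand. An illustrative verification that the subnet and gateway match what the log reports:
	
	  docker network inspect addons-101592 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	  # expected output for this run: 192.168.49.0/24 192.168.49.1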
	I1027 18:57:10.095716  268639 kic.go:121] calculated static IP "192.168.49.2" for the "addons-101592" container
	I1027 18:57:10.095795  268639 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 18:57:10.111089  268639 cli_runner.go:164] Run: docker volume create addons-101592 --label name.minikube.sigs.k8s.io=addons-101592 --label created_by.minikube.sigs.k8s.io=true
	I1027 18:57:10.130170  268639 oci.go:103] Successfully created a docker volume addons-101592
	I1027 18:57:10.130268  268639 cli_runner.go:164] Run: docker run --rm --name addons-101592-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-101592 --entrypoint /usr/bin/test -v addons-101592:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 18:57:12.298196  268639 cli_runner.go:217] Completed: docker run --rm --name addons-101592-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-101592 --entrypoint /usr/bin/test -v addons-101592:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (2.167887458s)
	I1027 18:57:12.298229  268639 oci.go:107] Successfully prepared a docker volume addons-101592
	I1027 18:57:12.298255  268639 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 18:57:12.298273  268639 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 18:57:12.298341  268639 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-101592:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1027 18:57:16.696204  268639 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-101592:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.397811373s)
	I1027 18:57:16.696234  268639 kic.go:203] duration metric: took 4.397957921s to extract preloaded images to volume ...
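	The preload tarball is extracted into the named volume created above, which the node container later mounts at /var. An illustrative check that the volume exists and where the docker daemon keeps it:
	
	  docker volume inspect addons-101592 --format '{{.Mountpoint}}'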
	W1027 18:57:16.696382  268639 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1027 18:57:16.696491  268639 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 18:57:16.756137  268639 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-101592 --name addons-101592 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-101592 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-101592 --network addons-101592 --ip 192.168.49.2 --volume addons-101592:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 18:57:17.059533  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Running}}
	I1027 18:57:17.080775  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:17.107061  268639 cli_runner.go:164] Run: docker exec addons-101592 stat /var/lib/dpkg/alternatives/iptables
	I1027 18:57:17.173121  268639 oci.go:144] the created container "addons-101592" has a running status.
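	The long "docker run" above publishes the container's SSH and API-server ports on ephemeral 127.0.0.1 host ports (the --publish=127.0.0.1::22 style flags). To see which host port the SSH client below ends up dialing, illustratively:
	
	  docker port addons-101592 22/tcp    # 127.0.0.1:33128 in this run
	  docker port addons-101592 8443/tcp  # the Kubernetes API-server mapping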
	I1027 18:57:17.173153  268639 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa...
	I1027 18:57:17.327548  268639 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 18:57:17.352781  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:17.374890  268639 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 18:57:17.374914  268639 kic_runner.go:114] Args: [docker exec --privileged addons-101592 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 18:57:17.431709  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:17.463645  268639 machine.go:93] provisionDockerMachine start ...
	I1027 18:57:17.463864  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:17.501069  268639 main.go:141] libmachine: Using SSH client type: native
	I1027 18:57:17.501423  268639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1027 18:57:17.501434  268639 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 18:57:17.503514  268639 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1027 18:57:20.654368  268639 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-101592
	
	I1027 18:57:20.654392  268639 ubuntu.go:182] provisioning hostname "addons-101592"
	I1027 18:57:20.654460  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:20.671213  268639 main.go:141] libmachine: Using SSH client type: native
	I1027 18:57:20.671526  268639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1027 18:57:20.671543  268639 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-101592 && echo "addons-101592" | sudo tee /etc/hostname
	I1027 18:57:20.831851  268639 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-101592
	
	I1027 18:57:20.831942  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:20.851902  268639 main.go:141] libmachine: Using SSH client type: native
	I1027 18:57:20.852218  268639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1027 18:57:20.852237  268639 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-101592' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-101592/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-101592' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 18:57:20.998975  268639 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 18:57:20.999027  268639 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-266035/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-266035/.minikube}
	I1027 18:57:20.999047  268639 ubuntu.go:190] setting up certificates
	I1027 18:57:20.999065  268639 provision.go:84] configureAuth start
	I1027 18:57:20.999121  268639 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-101592
	I1027 18:57:21.016601  268639 provision.go:143] copyHostCerts
	I1027 18:57:21.016698  268639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem (1123 bytes)
	I1027 18:57:21.016860  268639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem (1675 bytes)
	I1027 18:57:21.016918  268639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem (1078 bytes)
	I1027 18:57:21.016965  268639 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem org=jenkins.addons-101592 san=[127.0.0.1 192.168.49.2 addons-101592 localhost minikube]
	I1027 18:57:21.332569  268639 provision.go:177] copyRemoteCerts
	I1027 18:57:21.332637  268639 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 18:57:21.332679  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:21.349510  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:21.455164  268639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 18:57:21.472974  268639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1027 18:57:21.490811  268639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 18:57:21.507969  268639 provision.go:87] duration metric: took 508.890093ms to configureAuth
	I1027 18:57:21.507994  268639 ubuntu.go:206] setting minikube options for container-runtime
	I1027 18:57:21.508185  268639 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:57:21.508299  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:21.525664  268639 main.go:141] libmachine: Using SSH client type: native
	I1027 18:57:21.526011  268639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1027 18:57:21.526026  268639 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 18:57:21.784028  268639 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 18:57:21.784113  268639 machine.go:96] duration metric: took 4.320377305s to provisionDockerMachine
	I1027 18:57:21.784140  268639 client.go:171] duration metric: took 12.442876081s to LocalClient.Create
	I1027 18:57:21.784187  268639 start.go:167] duration metric: took 12.442983664s to libmachine.API.Create "addons-101592"
	I1027 18:57:21.784211  268639 start.go:293] postStartSetup for "addons-101592" (driver="docker")
	I1027 18:57:21.784235  268639 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 18:57:21.784318  268639 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 18:57:21.784423  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:21.802390  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:21.907824  268639 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 18:57:21.910935  268639 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 18:57:21.910964  268639 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 18:57:21.911000  268639 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/addons for local assets ...
	I1027 18:57:21.911068  268639 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/files for local assets ...
	I1027 18:57:21.911097  268639 start.go:296] duration metric: took 126.866646ms for postStartSetup
	I1027 18:57:21.911412  268639 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-101592
	I1027 18:57:21.927435  268639 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/config.json ...
	I1027 18:57:21.927738  268639 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 18:57:21.928042  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:21.947817  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:22.048221  268639 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 18:57:22.053118  268639 start.go:128] duration metric: took 12.715661932s to createHost
	I1027 18:57:22.053144  268639 start.go:83] releasing machines lock for "addons-101592", held for 12.715797199s
	I1027 18:57:22.053219  268639 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-101592
	I1027 18:57:22.070820  268639 ssh_runner.go:195] Run: cat /version.json
	I1027 18:57:22.070873  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:22.070901  268639 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 18:57:22.070957  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:22.091446  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:22.107477  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:22.286110  268639 ssh_runner.go:195] Run: systemctl --version
	I1027 18:57:22.292273  268639 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 18:57:22.327439  268639 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 18:57:22.331498  268639 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 18:57:22.331566  268639 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 18:57:22.359263  268639 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1027 18:57:22.359285  268639 start.go:495] detecting cgroup driver to use...
	I1027 18:57:22.359318  268639 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 18:57:22.359368  268639 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 18:57:22.376291  268639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 18:57:22.388549  268639 docker.go:218] disabling cri-docker service (if available) ...
	I1027 18:57:22.388615  268639 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 18:57:22.406207  268639 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 18:57:22.424890  268639 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 18:57:22.546017  268639 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 18:57:22.672621  268639 docker.go:234] disabling docker service ...
	I1027 18:57:22.672726  268639 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 18:57:22.694438  268639 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 18:57:22.707859  268639 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 18:57:22.822221  268639 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 18:57:22.942216  268639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
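	Because cri-o is the selected runtime, the docker and cri-docker services inside the node are stopped and masked before cri-o is configured. A sketch of how to verify that from the host:
	
	  minikube -p addons-101592 ssh -- systemctl is-enabled docker.service cri-docker.service
	  # both are expected to report "masked" after the steps above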
	I1027 18:57:22.954534  268639 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 18:57:22.967923  268639 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 18:57:22.968038  268639 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:57:22.976498  268639 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 18:57:22.976566  268639 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:57:22.984728  268639 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:57:22.992956  268639 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:57:23.001100  268639 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 18:57:23.008913  268639 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:57:23.018345  268639 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:57:23.031958  268639 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
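	The sed commands above amount to a small set of overrides in /etc/crio/crio.conf.d/02-crio.conf. After they run, the file is expected to contain roughly the following settings (section headers omitted; they come from the stock kicbase config):
	
	  sudo cat /etc/crio/crio.conf.d/02-crio.conf
	  # pause_image = "registry.k8s.io/pause:3.10.1"
	  # cgroup_manager = "cgroupfs"
	  # conmon_cgroup = "pod"
	  # default_sysctls = [
	  #   "net.ipv4.ip_unprivileged_port_start=0",
	  # ]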
	I1027 18:57:23.040438  268639 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 18:57:23.047966  268639 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 18:57:23.054954  268639 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 18:57:23.159325  268639 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 18:57:23.280722  268639 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 18:57:23.280840  268639 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 18:57:23.284720  268639 start.go:563] Will wait 60s for crictl version
	I1027 18:57:23.284804  268639 ssh_runner.go:195] Run: which crictl
	I1027 18:57:23.288557  268639 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 18:57:23.312542  268639 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
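	With the runtime endpoint written to /etc/crictl.yaml a few lines up, a plain "sudo crictl version" talks to cri-o by default; the endpoint can also be passed explicitly, e.g.:
	
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version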
	I1027 18:57:23.312661  268639 ssh_runner.go:195] Run: crio --version
	I1027 18:57:23.342294  268639 ssh_runner.go:195] Run: crio --version
	I1027 18:57:23.371912  268639 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 18:57:23.374767  268639 cli_runner.go:164] Run: docker network inspect addons-101592 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 18:57:23.392051  268639 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1027 18:57:23.395676  268639 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 18:57:23.405280  268639 kubeadm.go:883] updating cluster {Name:addons-101592 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-101592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 18:57:23.405420  268639 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 18:57:23.405501  268639 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 18:57:23.438906  268639 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 18:57:23.438934  268639 crio.go:433] Images already preloaded, skipping extraction
	I1027 18:57:23.439026  268639 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 18:57:23.463549  268639 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 18:57:23.463577  268639 cache_images.go:85] Images are preloaded, skipping loading
	I1027 18:57:23.463586  268639 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1027 18:57:23.463671  268639 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-101592 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-101592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 18:57:23.463753  268639 ssh_runner.go:195] Run: crio config
	I1027 18:57:23.516609  268639 cni.go:84] Creating CNI manager for ""
	I1027 18:57:23.516630  268639 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 18:57:23.516655  268639 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 18:57:23.516705  268639 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-101592 NodeName:addons-101592 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 18:57:23.516856  268639 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-101592"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
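	The kubeadm config dumped above is what gets copied to the node as /var/tmp/minikube/kubeadm.yaml.new a few lines below. As an illustrative offline sanity check (recent kubeadm releases ship a "config validate" subcommand):
	
	  kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new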
	
	I1027 18:57:23.516931  268639 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 18:57:23.524912  268639 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 18:57:23.525013  268639 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 18:57:23.532698  268639 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1027 18:57:23.545455  268639 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 18:57:23.558521  268639 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1027 18:57:23.571575  268639 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1027 18:57:23.575048  268639 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
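	This is the same grep-and-rewrite pattern on /etc/hosts used for host.minikube.internal earlier; here it pins control-plane.minikube.internal to the node IP. An illustrative check of the end result inside the node:
	
	  grep -E 'host\.minikube\.internal|control-plane\.minikube\.internal' /etc/hosts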
	I1027 18:57:23.584989  268639 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 18:57:23.707287  268639 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 18:57:23.722305  268639 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592 for IP: 192.168.49.2
	I1027 18:57:23.722375  268639 certs.go:195] generating shared ca certs ...
	I1027 18:57:23.722406  268639 certs.go:227] acquiring lock for ca certs: {Name:mk172548fc54811b51040cc0201d01eb4b3dd19b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:23.722577  268639 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key
	I1027 18:57:23.791332  268639 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt ...
	I1027 18:57:23.791406  268639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt: {Name:mkab07fc960645e058a12a29888618199563b2d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:23.791609  268639 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key ...
	I1027 18:57:23.791630  268639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key: {Name:mkf51e48da48d79f5b53f47b013afa79ea5d78e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:23.791725  268639 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key
	I1027 18:57:24.536456  268639 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.crt ...
	I1027 18:57:24.536489  268639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.crt: {Name:mkbe7b0e104d91908762b0382eb112f017333bf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:24.536687  268639 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key ...
	I1027 18:57:24.536702  268639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key: {Name:mkd483c24f05e0e063c384b3cc3e67b2223c5e0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:24.536776  268639 certs.go:257] generating profile certs ...
	I1027 18:57:24.536836  268639 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.key
	I1027 18:57:24.536855  268639 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt with IP's: []
	I1027 18:57:24.751302  268639 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt ...
	I1027 18:57:24.751336  268639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: {Name:mk5686a3a2e49db78e669096e059dd37e074b59e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:24.751531  268639 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.key ...
	I1027 18:57:24.751545  268639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.key: {Name:mk1dc59feefb4276e6fb4bcc52102e4ae7b37c45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:24.751652  268639 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/apiserver.key.7bf29f12
	I1027 18:57:24.751673  268639 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/apiserver.crt.7bf29f12 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1027 18:57:26.047734  268639 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/apiserver.crt.7bf29f12 ...
	I1027 18:57:26.047769  268639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/apiserver.crt.7bf29f12: {Name:mkb44ef4b98fce9af3b9c9e924ab7fac7612f78e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:26.047985  268639 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/apiserver.key.7bf29f12 ...
	I1027 18:57:26.047999  268639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/apiserver.key.7bf29f12: {Name:mk169b2631dbedc732b16442eedc054ed2811995 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:26.048089  268639 certs.go:382] copying /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/apiserver.crt.7bf29f12 -> /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/apiserver.crt
	I1027 18:57:26.048176  268639 certs.go:386] copying /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/apiserver.key.7bf29f12 -> /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/apiserver.key
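	The apiserver certificate above is generated under a hash-suffixed name (apiserver.crt.7bf29f12) and then copied into place, with IP SANs covering the in-cluster Service VIP (10.96.0.1), loopback, and the node IP (192.168.49.2). A sketch of issuing a serving certificate with those IP SANs using Go's standard library (self-signed for brevity; the real cert is signed by the profile CA, and the subject name here is an assumption):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // IP SANs as logged for the apiserver cert
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }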
	I1027 18:57:26.048230  268639 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/proxy-client.key
	I1027 18:57:26.048254  268639 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/proxy-client.crt with IP's: []
	I1027 18:57:26.925460  268639 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/proxy-client.crt ...
	I1027 18:57:26.925492  268639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/proxy-client.crt: {Name:mk667682862ed7e86975a2bb3d5fed80ecd1608c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:26.925683  268639 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/proxy-client.key ...
	I1027 18:57:26.925697  268639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/proxy-client.key: {Name:mk46a22031d759578af616ea845bee9a4972120e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:26.925896  268639 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 18:57:26.925934  268639 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem (1078 bytes)
	I1027 18:57:26.926029  268639 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem (1123 bytes)
	I1027 18:57:26.926072  268639 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem (1675 bytes)
	I1027 18:57:26.926636  268639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 18:57:26.944584  268639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 18:57:26.962295  268639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 18:57:26.979609  268639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 18:57:26.997025  268639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 18:57:27.015956  268639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 18:57:27.035615  268639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 18:57:27.053222  268639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 18:57:27.070087  268639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 18:57:27.087205  268639 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 18:57:27.099638  268639 ssh_runner.go:195] Run: openssl version
	I1027 18:57:27.105895  268639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 18:57:27.114239  268639 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 18:57:27.118191  268639 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1027 18:57:27.118263  268639 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 18:57:27.159444  268639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
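	The openssl x509 -hash call computes the CA's subject hash, and the following ln names the certificate <hash>.0 inside /etc/ssl/certs; that hashed-directory naming is how OpenSSL-based clients locate trust roots, and b5213941 is the hash of minikubeCA. A sketch recreating both steps, shelling out to openssl as the log does (paths taken from the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem"
        target := "/etc/ssl/certs/minikubeCA.pem"
        // openssl x509 -hash -noout -in <pem> prints the subject hash, e.g. "b5213941"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        // ln -fs equivalent: replace any existing link
        os.Remove(link)
        if err := os.Symlink(target, link); err != nil {
            panic(err)
        }
        fmt.Println("linked", link, "->", target)
    }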
	I1027 18:57:27.167759  268639 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 18:57:27.171419  268639 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 18:57:27.171469  268639 kubeadm.go:400] StartCluster: {Name:addons-101592 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-101592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 18:57:27.171545  268639 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 18:57:27.171623  268639 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 18:57:27.199269  268639 cri.go:89] found id: ""
	I1027 18:57:27.199338  268639 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 18:57:27.207380  268639 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 18:57:27.215416  268639 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1027 18:57:27.215486  268639 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 18:57:27.223770  268639 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 18:57:27.223788  268639 kubeadm.go:157] found existing configuration files:
	
	I1027 18:57:27.223866  268639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 18:57:27.231730  268639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 18:57:27.231841  268639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 18:57:27.239229  268639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 18:57:27.246908  268639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 18:57:27.247016  268639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 18:57:27.254430  268639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 18:57:27.262322  268639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 18:57:27.262438  268639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 18:57:27.269907  268639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 18:57:27.277824  268639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 18:57:27.277929  268639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
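	The sweep above checks each kubeconfig-style file for the expected control-plane endpoint and removes any file that is missing it (here they are all absent, since this is a first start), so kubeadm regenerates them. The same check-then-remove loop as a Go sketch (rm -f semantics: removal errors are ignored):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err == nil && strings.Contains(string(data), endpoint) {
                continue // already targets the expected endpoint; keep it
            }
            // missing or pointing elsewhere: remove so kubeadm writes a fresh one
            os.Remove(f)
            fmt.Println("removed stale config:", f)
        }
    }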
	I1027 18:57:27.285534  268639 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 18:57:27.325388  268639 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1027 18:57:27.325680  268639 kubeadm.go:318] [preflight] Running pre-flight checks
	I1027 18:57:27.350776  268639 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1027 18:57:27.350934  268639 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1027 18:57:27.351026  268639 kubeadm.go:318] OS: Linux
	I1027 18:57:27.351111  268639 kubeadm.go:318] CGROUPS_CPU: enabled
	I1027 18:57:27.351202  268639 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1027 18:57:27.351285  268639 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1027 18:57:27.351374  268639 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1027 18:57:27.351459  268639 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1027 18:57:27.351544  268639 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1027 18:57:27.351627  268639 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1027 18:57:27.351713  268639 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1027 18:57:27.351800  268639 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1027 18:57:27.415055  268639 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 18:57:27.415247  268639 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 18:57:27.415389  268639 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 18:57:27.427490  268639 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 18:57:27.431694  268639 out.go:252]   - Generating certificates and keys ...
	I1027 18:57:27.431880  268639 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1027 18:57:27.432004  268639 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1027 18:57:27.745776  268639 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 18:57:28.095074  268639 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1027 18:57:28.752082  268639 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1027 18:57:28.971103  268639 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1027 18:57:29.275279  268639 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1027 18:57:29.275624  268639 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-101592 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1027 18:57:29.607454  268639 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1027 18:57:29.607804  268639 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-101592 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1027 18:57:29.962559  268639 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 18:57:31.042682  268639 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 18:57:31.216462  268639 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1027 18:57:31.216774  268639 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 18:57:32.564332  268639 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 18:57:34.217167  268639 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 18:57:34.461204  268639 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 18:57:35.385694  268639 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 18:57:35.931494  268639 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 18:57:35.932078  268639 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 18:57:35.934570  268639 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 18:57:35.938025  268639 out.go:252]   - Booting up control plane ...
	I1027 18:57:35.938134  268639 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 18:57:35.938216  268639 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 18:57:35.938288  268639 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 18:57:35.953613  268639 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 18:57:35.954121  268639 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 18:57:35.961752  268639 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 18:57:35.962082  268639 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 18:57:35.962129  268639 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1027 18:57:36.101101  268639 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 18:57:36.101228  268639 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 18:57:37.107595  268639 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.005121736s
	I1027 18:57:37.109980  268639 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 18:57:37.110086  268639 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1027 18:57:37.110421  268639 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 18:57:37.111461  268639 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 18:57:39.331823  268639 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.220786228s
	I1027 18:57:41.928158  268639 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.817301764s
	I1027 18:57:43.612092  268639 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.50144149s
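	The three control-plane-check probes above are plain HTTPS health endpoints: the apiserver's /livez on the node IP, and the controller-manager and scheduler health ports on loopback. A sketch of the same probes as simple GETs (InsecureSkipVerify stands in for kubeadm's real client credentials and is an assumption; URLs are from the log):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        endpoints := map[string]string{
            "kube-apiserver":          "https://192.168.49.2:8443/livez",
            "kube-controller-manager": "https://127.0.0.1:10257/healthz",
            "kube-scheduler":          "https://127.0.0.1:10259/livez",
        }
        for name, url := range endpoints {
            resp, err := client.Get(url)
            if err != nil {
                fmt.Printf("%s: %v\n", name, err)
                continue
            }
            resp.Body.Close()
            fmt.Printf("%s: %s\n", name, resp.Status)
        }
    }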
	I1027 18:57:43.631068  268639 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 18:57:43.652321  268639 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 18:57:43.666198  268639 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 18:57:43.666422  268639 kubeadm.go:318] [mark-control-plane] Marking the node addons-101592 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 18:57:43.678705  268639 kubeadm.go:318] [bootstrap-token] Using token: enw9xt.cnnain00qnmfg1uu
	I1027 18:57:43.681794  268639 out.go:252]   - Configuring RBAC rules ...
	I1027 18:57:43.681928  268639 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 18:57:43.687486  268639 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 18:57:43.698913  268639 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 18:57:43.702835  268639 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 18:57:43.706798  268639 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 18:57:43.710922  268639 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 18:57:44.022325  268639 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 18:57:44.468868  268639 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1027 18:57:45.023983  268639 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1027 18:57:45.024118  268639 kubeadm.go:318] 
	I1027 18:57:45.024199  268639 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1027 18:57:45.024209  268639 kubeadm.go:318] 
	I1027 18:57:45.024291  268639 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1027 18:57:45.024297  268639 kubeadm.go:318] 
	I1027 18:57:45.024323  268639 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1027 18:57:45.024387  268639 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 18:57:45.024441  268639 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 18:57:45.024446  268639 kubeadm.go:318] 
	I1027 18:57:45.024504  268639 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1027 18:57:45.024509  268639 kubeadm.go:318] 
	I1027 18:57:45.024559  268639 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 18:57:45.024564  268639 kubeadm.go:318] 
	I1027 18:57:45.024618  268639 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1027 18:57:45.024698  268639 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 18:57:45.024770  268639 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 18:57:45.024775  268639 kubeadm.go:318] 
	I1027 18:57:45.039654  268639 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 18:57:45.039761  268639 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1027 18:57:45.039766  268639 kubeadm.go:318] 
	I1027 18:57:45.039858  268639 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token enw9xt.cnnain00qnmfg1uu \
	I1027 18:57:45.039969  268639 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9473f077605f6866eb08c8a712af7c56a0deb8dc2a907ec8cbb4b8ae396e8f1f \
	I1027 18:57:45.039992  268639 kubeadm.go:318] 	--control-plane 
	I1027 18:57:45.039997  268639 kubeadm.go:318] 
	I1027 18:57:45.040089  268639 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1027 18:57:45.040094  268639 kubeadm.go:318] 
	I1027 18:57:45.040182  268639 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token enw9xt.cnnain00qnmfg1uu \
	I1027 18:57:45.040292  268639 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9473f077605f6866eb08c8a712af7c56a0deb8dc2a907ec8cbb4b8ae396e8f1f 
	I1027 18:57:45.057882  268639 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1027 18:57:45.058154  268639 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1027 18:57:45.058270  268639 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 18:57:45.058294  268639 cni.go:84] Creating CNI manager for ""
	I1027 18:57:45.058304  268639 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 18:57:45.062021  268639 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 18:57:45.066557  268639 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 18:57:45.076571  268639 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 18:57:45.076595  268639 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 18:57:45.097879  268639 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
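	The CNI selection logged at cni.go:143 keys off the driver/runtime pair: with the docker driver and the crio runtime there is no built-in bridge from the runtime, so kindnet is recommended, rendered to /var/tmp/minikube/cni.yaml, and applied. A toy decision table illustrating only the logged case (the fallback branch is an assumption, not minikube's full cni logic):

    package main

    import "fmt"

    func chooseCNI(driver, runtime string) string {
        // the log shows: "docker" driver + "crio" runtime -> kindnet
        if driver == "docker" && runtime == "crio" {
            return "kindnet"
        }
        return "bridge" // illustrative fallback only
    }

    func main() {
        fmt.Println(chooseCNI("docker", "crio"))
    }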
	I1027 18:57:45.590455  268639 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 18:57:45.590587  268639 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:45.590669  268639 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-101592 minikube.k8s.io/updated_at=2025_10_27T18_57_45_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f minikube.k8s.io/name=addons-101592 minikube.k8s.io/primary=true
	I1027 18:57:45.750330  268639 ops.go:34] apiserver oom_adj: -16
	I1027 18:57:45.750415  268639 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:46.251072  268639 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:46.750951  268639 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:47.250529  268639 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:47.751459  268639 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:48.250517  268639 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:48.750450  268639 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:49.251013  268639 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:49.751305  268639 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:49.851016  268639 kubeadm.go:1113] duration metric: took 4.260471325s to wait for elevateKubeSystemPrivileges
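	The burst of identical "kubectl get sa default" runs above is a poll loop: after creating the minikube-rbac clusterrolebinding, minikube waits for the default ServiceAccount to appear (proof that the token controller is up) before proceeding, retrying at roughly 500ms intervals until it succeeds 4.26s later. The same wait loop as a Go sketch (the timeout and interval values are assumptions, not minikube's exact constants):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig,
                "get", "sa", "default")
            if err := cmd.Run(); err == nil {
                return nil // default ServiceAccount exists; token controller is up
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default ServiceAccount not ready after %s", timeout)
    }

    func main() {
        if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("cluster ready for addon installation")
    }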
	I1027 18:57:49.851042  268639 kubeadm.go:402] duration metric: took 22.679577507s to StartCluster
	I1027 18:57:49.851059  268639 settings.go:142] acquiring lock: {Name:mk49b9e2e9b7901c6b51e89b8b1e8ee7f9e88107 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:49.851166  268639 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 18:57:49.851549  268639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/kubeconfig: {Name:mk0a03a594f8d9f9199b1c7cff7c550cec8414c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:49.851742  268639 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 18:57:49.851913  268639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 18:57:49.852153  268639 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:57:49.852183  268639 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1027 18:57:49.852261  268639 addons.go:69] Setting yakd=true in profile "addons-101592"
	I1027 18:57:49.852274  268639 addons.go:238] Setting addon yakd=true in "addons-101592"
	I1027 18:57:49.852295  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:49.852764  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:49.852935  268639 addons.go:69] Setting inspektor-gadget=true in profile "addons-101592"
	I1027 18:57:49.852947  268639 addons.go:238] Setting addon inspektor-gadget=true in "addons-101592"
	I1027 18:57:49.852967  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:49.853347  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:49.854601  268639 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-101592"
	I1027 18:57:49.854632  268639 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-101592"
	I1027 18:57:49.854670  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:49.855154  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:49.855297  268639 addons.go:69] Setting metrics-server=true in profile "addons-101592"
	I1027 18:57:49.855362  268639 addons.go:238] Setting addon metrics-server=true in "addons-101592"
	I1027 18:57:49.855404  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:49.855953  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:49.865482  268639 out.go:179] * Verifying Kubernetes components...
	I1027 18:57:49.865667  268639 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-101592"
	I1027 18:57:49.865727  268639 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-101592"
	I1027 18:57:49.865745  268639 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-101592"
	I1027 18:57:49.865803  268639 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-101592"
	I1027 18:57:49.865836  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:49.865871  268639 addons.go:69] Setting registry=true in profile "addons-101592"
	I1027 18:57:49.865908  268639 addons.go:238] Setting addon registry=true in "addons-101592"
	I1027 18:57:49.866003  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:49.866405  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:49.866668  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:49.879058  268639 addons.go:69] Setting default-storageclass=true in profile "addons-101592"
	I1027 18:57:49.879095  268639 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-101592"
	I1027 18:57:49.879478  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:49.886513  268639 addons.go:69] Setting registry-creds=true in profile "addons-101592"
	I1027 18:57:49.886612  268639 addons.go:238] Setting addon registry-creds=true in "addons-101592"
	I1027 18:57:49.886685  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:49.890716  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:49.900026  268639 addons.go:69] Setting gcp-auth=true in profile "addons-101592"
	I1027 18:57:49.900071  268639 mustload.go:65] Loading cluster: addons-101592
	I1027 18:57:49.900302  268639 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:57:49.900582  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:49.911610  268639 addons.go:69] Setting storage-provisioner=true in profile "addons-101592"
	I1027 18:57:49.911660  268639 addons.go:238] Setting addon storage-provisioner=true in "addons-101592"
	I1027 18:57:49.911696  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:49.912225  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:49.930419  268639 addons.go:69] Setting ingress=true in profile "addons-101592"
	I1027 18:57:49.930462  268639 addons.go:238] Setting addon ingress=true in "addons-101592"
	I1027 18:57:49.930509  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:49.935417  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:49.939066  268639 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-101592"
	I1027 18:57:49.939098  268639 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-101592"
	I1027 18:57:49.939478  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:49.955452  268639 addons.go:69] Setting ingress-dns=true in profile "addons-101592"
	I1027 18:57:49.955481  268639 addons.go:238] Setting addon ingress-dns=true in "addons-101592"
	I1027 18:57:49.955528  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:49.956051  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:49.958123  268639 addons.go:69] Setting volcano=true in profile "addons-101592"
	I1027 18:57:49.958157  268639 addons.go:238] Setting addon volcano=true in "addons-101592"
	I1027 18:57:49.958191  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:49.958638  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:49.972390  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:49.975334  268639 addons.go:69] Setting volumesnapshots=true in profile "addons-101592"
	I1027 18:57:49.975364  268639 addons.go:238] Setting addon volumesnapshots=true in "addons-101592"
	I1027 18:57:49.975398  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:49.975861  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:49.865737  268639 addons.go:69] Setting cloud-spanner=true in profile "addons-101592"
	I1027 18:57:49.992826  268639 addons.go:238] Setting addon cloud-spanner=true in "addons-101592"
	I1027 18:57:49.992878  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:49.993348  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:50.015197  268639 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 18:57:50.051189  268639 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1027 18:57:50.074977  268639 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1027 18:57:50.094612  268639 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1027 18:57:50.123253  268639 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1027 18:57:50.123277  268639 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1027 18:57:50.123356  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:50.137026  268639 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1027 18:57:50.137776  268639 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1027 18:57:50.144317  268639 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1027 18:57:50.144643  268639 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1027 18:57:50.144720  268639 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1027 18:57:50.144795  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:50.168753  268639 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1027 18:57:50.168775  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1027 18:57:50.168847  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:50.174639  268639 out.go:179]   - Using image docker.io/registry:3.0.0
	I1027 18:57:50.190922  268639 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1027 18:57:50.190970  268639 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1027 18:57:50.192435  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:50.199563  268639 addons.go:238] Setting addon default-storageclass=true in "addons-101592"
	I1027 18:57:50.199602  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:50.200041  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:50.224686  268639 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1027 18:57:50.225324  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:50.231778  268639 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1027 18:57:50.231805  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1027 18:57:50.231873  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:50.265818  268639 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-101592"
	I1027 18:57:50.265866  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:50.266395  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:50.298205  268639 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1027 18:57:50.298275  268639 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1027 18:57:50.299562  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:50.300381  268639 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 18:57:50.301396  268639 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1027 18:57:50.301417  268639 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1027 18:57:50.301484  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	W1027 18:57:50.310566  268639 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1027 18:57:50.311068  268639 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1027 18:57:50.311085  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1027 18:57:50.311145  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:50.313526  268639 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1027 18:57:50.313741  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:50.323197  268639 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 18:57:50.323418  268639 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1027 18:57:50.323432  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1027 18:57:50.323498  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:50.323859  268639 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1027 18:57:50.324027  268639 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 18:57:50.324200  268639 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1027 18:57:50.331555  268639 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1027 18:57:50.335898  268639 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1027 18:57:50.335934  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1027 18:57:50.335996  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:50.345195  268639 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 18:57:50.345220  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 18:57:50.345342  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:50.350395  268639 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1027 18:57:50.350416  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1027 18:57:50.350478  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:50.365640  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:50.366581  268639 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1027 18:57:50.366787  268639 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1027 18:57:50.375113  268639 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1027 18:57:50.375366  268639 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1027 18:57:50.375382  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1027 18:57:50.375456  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:50.385852  268639 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1027 18:57:50.402578  268639 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1027 18:57:50.411172  268639 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1027 18:57:50.420826  268639 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1027 18:57:50.420951  268639 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1027 18:57:50.421058  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:50.427032  268639 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1027 18:57:50.431424  268639 out.go:179]   - Using image docker.io/busybox:stable
	I1027 18:57:50.438350  268639 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1027 18:57:50.438371  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1027 18:57:50.438443  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:50.458441  268639 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 18:57:50.458507  268639 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 18:57:50.458606  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:50.493054  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:50.493833  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:50.494448  268639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 18:57:50.528985  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:50.529492  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:50.536762  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:50.560208  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:50.561057  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:50.567146  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:50.585638  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:50.596412  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:50.598238  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:50.614147  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	W1027 18:57:50.615157  268639 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1027 18:57:50.615185  268639 retry.go:31] will retry after 337.973811ms: ssh: handshake failed: EOF
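	The W-level line above shows the transient-failure path: when one of the many parallel SSH dials fails its handshake, retry.go schedules another attempt after a randomized delay (337.973811ms here) so the concurrent dials don't stampede the SSH server. A generic retry-with-jitter sketch of that behavior (the jitter bound and attempt count are assumptions; only retry-on-transient-error is from the log):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func retryTransient(attempts int, op func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            // sleep a random sub-second delay before the next attempt
            d := time.Duration(rand.Int63n(int64(500 * time.Millisecond)))
            fmt.Printf("will retry after %s: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        calls := 0
        err := retryTransient(5, func() error {
            calls++
            if calls < 3 {
                return errors.New("ssh: handshake failed: EOF")
            }
            return nil
        })
        fmt.Println("done:", err)
    }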
	I1027 18:57:50.656616  268639 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 18:57:51.020012  268639 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1027 18:57:51.020089  268639 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1027 18:57:51.057381  268639 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1027 18:57:51.057405  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1027 18:57:51.134953  268639 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1027 18:57:51.135090  268639 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1027 18:57:51.138231  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1027 18:57:51.146580  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1027 18:57:51.151600  268639 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:51.151665  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1027 18:57:51.155903  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1027 18:57:51.213921  268639 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1027 18:57:51.214001  268639 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1027 18:57:51.215636  268639 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1027 18:57:51.215706  268639 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1027 18:57:51.243044  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1027 18:57:51.261610  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:51.267076  268639 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1027 18:57:51.267139  268639 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1027 18:57:51.383329  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1027 18:57:51.432641  268639 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1027 18:57:51.432662  268639 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1027 18:57:51.440786  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1027 18:57:51.450130  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 18:57:51.517945  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1027 18:57:51.550782  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 18:57:51.604373  268639 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1027 18:57:51.604443  268639 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1027 18:57:51.613547  268639 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1027 18:57:51.613618  268639 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1027 18:57:51.638785  268639 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1027 18:57:51.638859  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1027 18:57:51.652944  268639 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1027 18:57:51.653019  268639 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1027 18:57:51.675977  268639 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1027 18:57:51.676050  268639 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1027 18:57:51.798010  268639 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1027 18:57:51.798087  268639 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1027 18:57:51.820964  268639 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1027 18:57:51.821031  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1027 18:57:51.872349  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1027 18:57:51.885417  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1027 18:57:52.046798  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1027 18:57:52.057175  268639 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1027 18:57:52.057254  268639 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1027 18:57:52.061834  268639 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1027 18:57:52.061913  268639 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1027 18:57:52.159933  268639 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.503284675s)
	I1027 18:57:52.160099  268639 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.665628366s)
	I1027 18:57:52.160135  268639 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
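The sed pipeline above rewrites the coredns ConfigMap in place: it inserts a hosts block mapping 192.168.49.1 to host.minikube.internal ahead of the forward directive, adds a log directive ahead of errors, then replaces the ConfigMap. A quick way to confirm the injected block landed, assuming kubectl is pointed at the cluster's kubeconfig:

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# expected fragment after injection:
	#     hosts {
	#        192.168.49.1 host.minikube.internal
	#        fallthrough
	#     }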
	I1027 18:57:52.161634  268639 node_ready.go:35] waiting up to 6m0s for node "addons-101592" to be "Ready" ...
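node_ready.go polls the node object for up to six minutes until its Ready condition turns True. A rough standalone equivalent, as an illustration rather than what minikube itself runs:

	kubectl wait --for=condition=Ready node/addons-101592 --timeout=6m0s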
	I1027 18:57:52.231389  268639 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1027 18:57:52.231410  268639 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1027 18:57:52.281477  268639 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1027 18:57:52.281496  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1027 18:57:52.554845  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.416522301s)
	I1027 18:57:52.554933  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.408280587s)
	I1027 18:57:52.610533  268639 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1027 18:57:52.610559  268639 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1027 18:57:52.636009  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1027 18:57:52.695387  268639 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-101592" context rescaled to 1 replicas
	I1027 18:57:52.842398  268639 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1027 18:57:52.842418  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1027 18:57:52.974014  268639 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1027 18:57:52.974091  268639 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1027 18:57:53.151597  268639 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1027 18:57:53.151668  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1027 18:57:53.372795  268639 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1027 18:57:53.372818  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1027 18:57:53.588666  268639 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1027 18:57:53.588690  268639 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1027 18:57:53.845143  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1027 18:57:54.179279  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:57:56.108706  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.952713347s)
	I1027 18:57:56.108737  268639 addons.go:479] Verifying addon ingress=true in "addons-101592"
	I1027 18:57:56.109002  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.865887294s)
	I1027 18:57:56.109223  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.847550263s)
	W1027 18:57:56.109262  268639 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:56.109302  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.668500542s)
	I1027 18:57:56.109303  268639 retry.go:31] will retry after 155.902552ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
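Every ig-crd.yaml failure in this run is the same client-side validation error: kubectl rejects any document whose top level is missing apiVersion or kind, so the CRD file shipped here appears malformed rather than merely racing the API server (the deprecation warning about the AppArmor annotation is unrelated noise). A minimal demonstration of the two required fields, using a hypothetical ConfigMap and a client-side dry run so nothing is created:

	kubectl apply --dry-run=client -f - <<'EOF'
	apiVersion: v1        # omit this line and kubectl reports "apiVersion not set"
	kind: ConfigMap       # omit this line and kubectl reports "kind not set"
	metadata:
	  name: validation-demo
	EOF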
	I1027 18:57:56.109279  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.725876533s)
	I1027 18:57:56.109331  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.659134218s)
	I1027 18:57:56.109454  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.591429513s)
	I1027 18:57:56.109507  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.55865899s)
	I1027 18:57:56.109582  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.237163226s)
	I1027 18:57:56.109636  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.224143468s)
	I1027 18:57:56.109659  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.062789496s)
	I1027 18:57:56.109728  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.473693978s)
	I1027 18:57:56.109815  268639 addons.go:479] Verifying addon metrics-server=true in "addons-101592"
	I1027 18:57:56.110031  268639 addons.go:479] Verifying addon registry=true in "addons-101592"
	W1027 18:57:56.111157  268639 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1027 18:57:56.111176  268639 retry.go:31] will retry after 199.373259ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
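This failure is an ordering problem rather than a bad manifest: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same batch that creates the snapshot.storage.k8s.io CRDs, and the API server cannot map the kind until those CRDs are established. One way to serialize the two steps by hand, assuming the same addon paths as the log:

	kubectl wait --for condition=established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml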
	I1027 18:57:56.111964  268639 out.go:179] * Verifying ingress addon...
	I1027 18:57:56.114196  268639 out.go:179] * Verifying registry addon...
	I1027 18:57:56.117323  268639 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1027 18:57:56.117532  268639 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-101592 service yakd-dashboard -n yakd-dashboard
	
	I1027 18:57:56.119411  268639 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1027 18:57:56.124123  268639 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1027 18:57:56.124149  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1027 18:57:56.126227  268639 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
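The storage-provisioner-rancher warning above is an optimistic-concurrency conflict: the callback read the local-path StorageClass, another writer updated it, and the follow-up update carried a stale resourceVersion. A server-side patch sidesteps the stale read; this is a manual sketch using the documented default-class annotation, not what the addon code itself does:

	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'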
	I1027 18:57:56.126614  268639 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1027 18:57:56.126631  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:56.266188  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:56.310829  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1027 18:57:56.493224  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.648020301s)
	I1027 18:57:56.493263  268639 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-101592"
	I1027 18:57:56.496053  268639 out.go:179] * Verifying csi-hostpath-driver addon...
	I1027 18:57:56.499368  268639 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1027 18:57:56.534750  268639 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1027 18:57:56.534776  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
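kapi.go keeps polling until every pod behind the label selector leaves Pending; the long runs of "current state: Pending" below are that loop ticking roughly twice per second. An approximate one-shot equivalent (Ready is a slightly stronger condition than the Running state the loop checks for):

	kubectl -n kube-system wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=6m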
	I1027 18:57:56.621543  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:56.624697  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:57:56.664926  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:57:57.002783  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:57.123730  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:57.124136  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:57.329650  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.063350377s)
	W1027 18:57:57.329704  268639 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:57.329743  268639 retry.go:31] will retry after 341.860927ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:57.502772  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:57.620805  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:57.622675  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:57.672677  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:57.900399  268639 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1027 18:57:57.900542  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:57.926161  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:58.003511  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:58.056790  268639 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1027 18:57:58.073991  268639 addons.go:238] Setting addon gcp-auth=true in "addons-101592"
	I1027 18:57:58.074048  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:58.074544  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:58.098740  268639 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1027 18:57:58.098815  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:58.121976  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:58.126933  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:58.132688  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:58.502889  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:58.621391  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:58.623634  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:57:58.665530  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:57:59.003712  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:59.123120  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:59.125318  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:59.200402  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.88947276s)
	I1027 18:57:59.200501  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.527790412s)
	W1027 18:57:59.200535  268639 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:59.200560  268639 retry.go:31] will retry after 568.935121ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:59.200599  268639 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.101834853s)
	I1027 18:57:59.203683  268639 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 18:57:59.206525  268639 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1027 18:57:59.209442  268639 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1027 18:57:59.209469  268639 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1027 18:57:59.223910  268639 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1027 18:57:59.223940  268639 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1027 18:57:59.237159  268639 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1027 18:57:59.237179  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1027 18:57:59.249272  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1027 18:57:59.502307  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:59.625480  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:59.626352  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:59.723968  268639 addons.go:479] Verifying addon gcp-auth=true in "addons-101592"
	I1027 18:57:59.729102  268639 out.go:179] * Verifying gcp-auth addon...
	I1027 18:57:59.733641  268639 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1027 18:57:59.743566  268639 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1027 18:57:59.743590  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:59.769979  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:58:00.002978  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:00.138145  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:00.138582  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:00.246619  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:00.504307  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:00.621295  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:00.635834  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:00.736803  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:00.848000  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.077978344s)
	W1027 18:58:00.848037  268639 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:00.848083  268639 retry.go:31] will retry after 1.036199785s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:01.003213  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:01.120886  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:01.123264  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:58:01.165604  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:58:01.236717  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:01.503030  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:01.621833  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:01.623034  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:01.737369  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:01.884511  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:58:02.003082  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:02.122460  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:02.123316  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:02.237544  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:02.502973  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:02.628206  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:02.628751  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:58:02.716973  268639 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:02.717005  268639 retry.go:31] will retry after 703.670259ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:02.736829  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:03.002927  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:03.121192  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:03.122936  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:58:03.166197  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:58:03.237580  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:03.421825  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:58:03.503072  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:03.622054  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:03.622297  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:03.737650  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:04.002104  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:04.120737  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:04.122472  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:58:04.229983  268639 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:04.230014  268639 retry.go:31] will retry after 2.538197086s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:04.236611  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:04.502921  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:04.620985  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:04.622680  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:04.737220  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:05.003361  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:05.122515  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:05.122642  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:05.237040  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:05.503251  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:05.621242  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:05.622115  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:58:05.665629  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:58:05.737205  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:06.002228  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:06.122018  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:06.122323  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:06.241014  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:06.502688  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:06.621179  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:06.622509  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:06.737091  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:06.769211  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:58:07.002423  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:07.121187  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:07.123426  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:07.237075  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:07.503217  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1027 18:58:07.596508  268639 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:07.596541  268639 retry.go:31] will retry after 3.031116829s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:07.620193  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:07.621992  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:07.736398  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:08.002770  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:08.121649  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:08.123804  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:58:08.164641  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:58:08.237472  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:08.502810  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:08.620556  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:08.623053  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:08.736743  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:09.002657  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:09.120611  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:09.122276  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:09.236568  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:09.502567  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:09.620785  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:09.622539  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:09.737255  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:10.003211  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:10.123495  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:10.124742  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:58:10.164728  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:58:10.236542  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:10.502671  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:10.620715  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:10.622290  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:10.628398  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:58:10.737840  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:11.003290  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:11.121776  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:11.126232  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:11.240634  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:58:11.447584  268639 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:11.447615  268639 retry.go:31] will retry after 5.012638589s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
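Across these attempts the retry.go delays grow from about 156ms to 5s, the shape of a jittered exponential backoff. A minimal shell sketch of the same pattern, assuming the node-local kubectl and addon paths from the log; the actual schedule and jitter are internal to minikube:

	delay=0.2
	for attempt in 1 2 3 4 5; do
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.34.1/kubectl apply --force \
	    -f /etc/kubernetes/addons/ig-crd.yaml \
	    -f /etc/kubernetes/addons/ig-deployment.yaml && break
	  sleep "$delay"
	  delay=$(echo "$delay * 2" | bc)   # double the wait each attempt
	done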
	I1027 18:58:11.502238  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:11.620080  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:11.621708  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:11.737027  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:12.004209  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:12.120990  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:12.123373  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:58:12.165076  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:58:12.237273  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:12.503049  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:12.621261  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:12.622661  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:12.737654  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:13.002421  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:13.120314  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:13.123688  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:13.236426  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:13.502485  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:13.621452  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:13.622278  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:13.737073  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:14.002235  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:14.121651  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:14.121818  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:14.236527  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:14.502413  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:14.620483  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:14.622439  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:58:14.665055  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:58:14.736857  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:15.002944  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:15.122613  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:15.124155  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:15.236383  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:15.502437  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:15.622244  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:15.622610  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:15.737077  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:16.003025  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:16.121194  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:16.122134  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:16.237573  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:16.461043  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:58:16.504105  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:16.623195  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:16.624247  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1027 18:58:16.665629  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:58:16.736869  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:17.003116  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:17.122050  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:17.123397  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:17.237481  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:58:17.278221  268639 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:17.278257  268639 retry.go:31] will retry after 9.258063329s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
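The retry.go lines show how the failing apply is wrapped: each failure is logged once as a warning by addons.go and once more by retry.go together with the next delay, and the delays grow with jitter (5.01s, 9.26s, then 9.86s below). A minimal sketch of that retry-with-backoff pattern; the growth rate and jitter here are assumptions for illustration, not minikube's actual parameters:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries fn up to attempts times, sleeping a little longer
// (plus jitter) after each failure, mirroring the "will retry after ..." lines.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Linear growth with up to 1s of jitter; assumed values for the sketch.
		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(time.Second)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	_ = retryWithBackoff(3, 4*time.Second, func() error {
		return errors.New("Process exited with status 1")
	})
}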
	I1027 18:58:17.502304  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:17.620190  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:17.622224  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:17.737119  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:18.003032  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:18.120877  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:18.122807  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:18.237310  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:18.502438  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:18.620642  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:18.622737  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:18.737384  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:19.003004  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:19.120896  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:19.122698  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:58:19.164560  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:58:19.237536  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:19.502913  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:19.620750  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:19.622300  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:19.736958  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:20.003290  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:20.121163  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:20.122774  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:20.237453  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:20.502468  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:20.620269  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:20.622062  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:20.736697  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:21.002775  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:21.121114  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:21.123183  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:58:21.165318  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:58:21.236485  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:21.502700  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:21.621235  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:21.622270  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:21.736704  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:22.002387  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:22.120355  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:22.122566  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:22.236834  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:22.502597  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:22.620639  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:22.622731  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:22.736609  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:23.002888  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:23.120785  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:23.122609  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:23.237516  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:23.502471  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:23.621505  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:23.623547  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:58:23.665626  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:58:23.737433  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:24.002510  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:24.120482  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:24.122583  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:24.237570  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:24.502560  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:24.620728  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:24.622554  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:24.737299  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:25.002205  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:25.121096  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:25.121940  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:25.236306  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:25.502055  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:25.621145  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:25.621987  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:25.738148  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:26.002916  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:26.120821  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:26.122758  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:58:26.164484  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:58:26.241763  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:26.503079  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:26.537138  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:58:26.620539  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:26.622329  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:26.736945  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:27.002600  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:27.122595  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:27.122869  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:27.238476  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:58:27.369787  268639 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:27.369822  268639 retry.go:31] will retry after 9.860564125s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:27.502798  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:27.621763  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:27.621830  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:27.736645  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:28.002350  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:28.120828  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:28.122852  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:58:28.164551  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:58:28.237777  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:28.502839  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:28.621479  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:28.622824  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:28.737210  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:29.002258  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:29.120247  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:29.122397  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:29.236899  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:29.502735  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:29.621303  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:29.622494  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:29.737120  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:30.002922  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:30.121865  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:30.123748  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:58:30.165797  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:58:30.236863  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:30.503002  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:30.621048  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:30.623267  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:30.736881  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:31.003580  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:31.122545  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:31.123435  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:31.237282  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:31.502367  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:31.620328  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:31.622354  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:31.737108  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:32.017695  268639 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1027 18:58:32.017726  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:32.271154  268639 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1027 18:58:32.271220  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:32.272041  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:32.272537  268639 node_ready.go:49] node "addons-101592" is "Ready"
	I1027 18:58:32.272590  268639 node_ready.go:38] duration metric: took 40.110888336s for node "addons-101592" to be "Ready" ...
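The node_ready.go wait that completes here after 40.1s amounts to polling the Node object until its Ready condition reports True. A sketch of that check using client-go; the clientset construction is omitted and the function name is ours, not minikube's:

package nodewait

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeReady reports whether the named node has its Ready condition set to
// True; the earlier `"Ready":"False" status (will retry)` lines correspond
// to the false case here.
func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}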
	I1027 18:58:32.272619  268639 api_server.go:52] waiting for apiserver process to appear ...
	I1027 18:58:32.272707  268639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 18:58:32.292157  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:32.304997  268639 api_server.go:72] duration metric: took 42.453227168s to wait for apiserver process to appear ...
	I1027 18:58:32.305066  268639 api_server.go:88] waiting for apiserver healthz status ...
	I1027 18:58:32.305103  268639 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1027 18:58:32.353369  268639 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1027 18:58:32.354965  268639 api_server.go:141] control plane version: v1.34.1
	I1027 18:58:32.355051  268639 api_server.go:131] duration metric: took 49.96197ms to wait for apiserver health ...
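The healthz wait above is a plain HTTPS GET against the apiserver: status 200 with body "ok" means healthy. A sketch of that probe, with TLS verification skipped for brevity where the real check would trust the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Address taken from the log; InsecureSkipVerify is a shortcut for the sketch.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // a healthy apiserver prints "200: ok"
}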
	I1027 18:58:32.355074  268639 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 18:58:32.371510  268639 system_pods.go:59] 19 kube-system pods found
	I1027 18:58:32.371599  268639 system_pods.go:61] "coredns-66bc5c9577-kbgn5" [1991cfb9-3c4d-4617-bbae-e9323fb13c40] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 18:58:32.371623  268639 system_pods.go:61] "csi-hostpath-attacher-0" [51a4bf95-11c1-4dfb-a24d-3b612848e249] Pending
	I1027 18:58:32.371663  268639 system_pods.go:61] "csi-hostpath-resizer-0" [030a7823-ab0b-438e-a310-b5500db7434c] Pending
	I1027 18:58:32.371691  268639 system_pods.go:61] "csi-hostpathplugin-42bzh" [dfa2ad2c-a861-4699-851d-80b10c2a11b4] Pending
	I1027 18:58:32.371716  268639 system_pods.go:61] "etcd-addons-101592" [22ba28c5-70eb-44bb-bc1c-99fdfacd3b07] Running
	I1027 18:58:32.371742  268639 system_pods.go:61] "kindnet-87t7g" [2787c598-3064-4a7c-89a0-28d581264bc7] Running
	I1027 18:58:32.371774  268639 system_pods.go:61] "kube-apiserver-addons-101592" [24a55c4e-9f3b-416d-bfed-a9a74b7b1210] Running
	I1027 18:58:32.371801  268639 system_pods.go:61] "kube-controller-manager-addons-101592" [bd4d474e-49ea-4c7d-97a1-71dfcdae46c8] Running
	I1027 18:58:32.371827  268639 system_pods.go:61] "kube-ingress-dns-minikube" [5be647b4-394d-4537-a6b1-82da5122c552] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 18:58:32.371850  268639 system_pods.go:61] "kube-proxy-k9g92" [e9916ac3-9031-44fd-a0ed-d6e5647234b6] Running
	I1027 18:58:32.371887  268639 system_pods.go:61] "kube-scheduler-addons-101592" [cff17c74-74e7-40a5-9cf6-91c4fb0684fd] Running
	I1027 18:58:32.371916  268639 system_pods.go:61] "metrics-server-85b7d694d7-mmqw2" [38c42c1b-338f-4857-a519-d3a5ae92f76f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 18:58:32.371939  268639 system_pods.go:61] "nvidia-device-plugin-daemonset-sghjb" [4b33da98-82a0-4a8e-aa0c-c6cf4878e558] Pending
	I1027 18:58:32.371960  268639 system_pods.go:61] "registry-6b586f9694-jvgtv" [590c8e4c-843c-4795-a036-d9969aaa1662] Pending
	I1027 18:58:32.371995  268639 system_pods.go:61] "registry-creds-764b6fb674-fz96k" [aca5b0e8-8150-49b6-a3e8-536f93b6d0fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 18:58:32.372022  268639 system_pods.go:61] "registry-proxy-k87sb" [459056d8-2856-482a-8f3a-86430bc52e0e] Pending
	I1027 18:58:32.372051  268639 system_pods.go:61] "snapshot-controller-7d9fbc56b8-pqsgt" [c0a49fc2-a5cb-4b90-8e22-c6bfba73b976] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:58:32.372078  268639 system_pods.go:61] "snapshot-controller-7d9fbc56b8-pvz49" [abc88a48-3d78-49ca-9bbc-a47d33c52228] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:58:32.372113  268639 system_pods.go:61] "storage-provisioner" [dcd0674c-1f9d-42a7-acf7-06c97b686cf1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 18:58:32.372146  268639 system_pods.go:74] duration metric: took 17.048464ms to wait for pod list to return data ...
	I1027 18:58:32.372171  268639 default_sa.go:34] waiting for default service account to be created ...
	I1027 18:58:32.380679  268639 default_sa.go:45] found service account: "default"
	I1027 18:58:32.380758  268639 default_sa.go:55] duration metric: took 8.564235ms for default service account to be created ...
	I1027 18:58:32.380785  268639 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 18:58:32.404004  268639 system_pods.go:86] 19 kube-system pods found
	I1027 18:58:32.404093  268639 system_pods.go:89] "coredns-66bc5c9577-kbgn5" [1991cfb9-3c4d-4617-bbae-e9323fb13c40] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 18:58:32.404120  268639 system_pods.go:89] "csi-hostpath-attacher-0" [51a4bf95-11c1-4dfb-a24d-3b612848e249] Pending
	I1027 18:58:32.404157  268639 system_pods.go:89] "csi-hostpath-resizer-0" [030a7823-ab0b-438e-a310-b5500db7434c] Pending
	I1027 18:58:32.404183  268639 system_pods.go:89] "csi-hostpathplugin-42bzh" [dfa2ad2c-a861-4699-851d-80b10c2a11b4] Pending
	I1027 18:58:32.404208  268639 system_pods.go:89] "etcd-addons-101592" [22ba28c5-70eb-44bb-bc1c-99fdfacd3b07] Running
	I1027 18:58:32.404235  268639 system_pods.go:89] "kindnet-87t7g" [2787c598-3064-4a7c-89a0-28d581264bc7] Running
	I1027 18:58:32.404269  268639 system_pods.go:89] "kube-apiserver-addons-101592" [24a55c4e-9f3b-416d-bfed-a9a74b7b1210] Running
	I1027 18:58:32.404300  268639 system_pods.go:89] "kube-controller-manager-addons-101592" [bd4d474e-49ea-4c7d-97a1-71dfcdae46c8] Running
	I1027 18:58:32.404325  268639 system_pods.go:89] "kube-ingress-dns-minikube" [5be647b4-394d-4537-a6b1-82da5122c552] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 18:58:32.404387  268639 system_pods.go:89] "kube-proxy-k9g92" [e9916ac3-9031-44fd-a0ed-d6e5647234b6] Running
	I1027 18:58:32.404417  268639 system_pods.go:89] "kube-scheduler-addons-101592" [cff17c74-74e7-40a5-9cf6-91c4fb0684fd] Running
	I1027 18:58:32.404441  268639 system_pods.go:89] "metrics-server-85b7d694d7-mmqw2" [38c42c1b-338f-4857-a519-d3a5ae92f76f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 18:58:32.404463  268639 system_pods.go:89] "nvidia-device-plugin-daemonset-sghjb" [4b33da98-82a0-4a8e-aa0c-c6cf4878e558] Pending
	I1027 18:58:32.404500  268639 system_pods.go:89] "registry-6b586f9694-jvgtv" [590c8e4c-843c-4795-a036-d9969aaa1662] Pending
	I1027 18:58:32.404526  268639 system_pods.go:89] "registry-creds-764b6fb674-fz96k" [aca5b0e8-8150-49b6-a3e8-536f93b6d0fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 18:58:32.404548  268639 system_pods.go:89] "registry-proxy-k87sb" [459056d8-2856-482a-8f3a-86430bc52e0e] Pending
	I1027 18:58:32.404572  268639 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pqsgt" [c0a49fc2-a5cb-4b90-8e22-c6bfba73b976] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:58:32.404613  268639 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pvz49" [abc88a48-3d78-49ca-9bbc-a47d33c52228] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:58:32.404642  268639 system_pods.go:89] "storage-provisioner" [dcd0674c-1f9d-42a7-acf7-06c97b686cf1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 18:58:32.404675  268639 retry.go:31] will retry after 275.776227ms: missing components: kube-dns
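Each "missing components: kube-dns" retry means the coredns pod (which carries the k8s-app=kube-dns label) is still Pending; the wait is satisfied once at least one such pod reports phase Running. A sketch of that test, assuming an existing client-go clientset; the function name is ours:

package dnswait

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// kubeDNSRunning mirrors the "missing components" check: kube-dns counts as
// present once a pod labelled k8s-app=kube-dns in kube-system is Running.
func kubeDNSRunning(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
		LabelSelector: "k8s-app=kube-dns",
	})
	if err != nil {
		return false, err
	}
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodRunning {
			return true, nil
		}
	}
	return false, nil // caller retries with backoff, as in the log
}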
	I1027 18:58:32.518879  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:32.648339  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:32.648525  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:32.689679  268639 system_pods.go:86] 19 kube-system pods found
	I1027 18:58:32.689770  268639 system_pods.go:89] "coredns-66bc5c9577-kbgn5" [1991cfb9-3c4d-4617-bbae-e9323fb13c40] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 18:58:32.689796  268639 system_pods.go:89] "csi-hostpath-attacher-0" [51a4bf95-11c1-4dfb-a24d-3b612848e249] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 18:58:32.689839  268639 system_pods.go:89] "csi-hostpath-resizer-0" [030a7823-ab0b-438e-a310-b5500db7434c] Pending
	I1027 18:58:32.689870  268639 system_pods.go:89] "csi-hostpathplugin-42bzh" [dfa2ad2c-a861-4699-851d-80b10c2a11b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 18:58:32.689892  268639 system_pods.go:89] "etcd-addons-101592" [22ba28c5-70eb-44bb-bc1c-99fdfacd3b07] Running
	I1027 18:58:32.689916  268639 system_pods.go:89] "kindnet-87t7g" [2787c598-3064-4a7c-89a0-28d581264bc7] Running
	I1027 18:58:32.689950  268639 system_pods.go:89] "kube-apiserver-addons-101592" [24a55c4e-9f3b-416d-bfed-a9a74b7b1210] Running
	I1027 18:58:32.689979  268639 system_pods.go:89] "kube-controller-manager-addons-101592" [bd4d474e-49ea-4c7d-97a1-71dfcdae46c8] Running
	I1027 18:58:32.690005  268639 system_pods.go:89] "kube-ingress-dns-minikube" [5be647b4-394d-4537-a6b1-82da5122c552] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 18:58:32.690026  268639 system_pods.go:89] "kube-proxy-k9g92" [e9916ac3-9031-44fd-a0ed-d6e5647234b6] Running
	I1027 18:58:32.690062  268639 system_pods.go:89] "kube-scheduler-addons-101592" [cff17c74-74e7-40a5-9cf6-91c4fb0684fd] Running
	I1027 18:58:32.690092  268639 system_pods.go:89] "metrics-server-85b7d694d7-mmqw2" [38c42c1b-338f-4857-a519-d3a5ae92f76f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 18:58:32.690121  268639 system_pods.go:89] "nvidia-device-plugin-daemonset-sghjb" [4b33da98-82a0-4a8e-aa0c-c6cf4878e558] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 18:58:32.690171  268639 system_pods.go:89] "registry-6b586f9694-jvgtv" [590c8e4c-843c-4795-a036-d9969aaa1662] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 18:58:32.690199  268639 system_pods.go:89] "registry-creds-764b6fb674-fz96k" [aca5b0e8-8150-49b6-a3e8-536f93b6d0fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 18:58:32.690222  268639 system_pods.go:89] "registry-proxy-k87sb" [459056d8-2856-482a-8f3a-86430bc52e0e] Pending
	I1027 18:58:32.690246  268639 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pqsgt" [c0a49fc2-a5cb-4b90-8e22-c6bfba73b976] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:58:32.690281  268639 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pvz49" [abc88a48-3d78-49ca-9bbc-a47d33c52228] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:58:32.690315  268639 system_pods.go:89] "storage-provisioner" [dcd0674c-1f9d-42a7-acf7-06c97b686cf1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 18:58:32.690350  268639 retry.go:31] will retry after 352.092ms: missing components: kube-dns
	I1027 18:58:32.740054  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:33.002624  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:33.047909  268639 system_pods.go:86] 19 kube-system pods found
	I1027 18:58:33.048009  268639 system_pods.go:89] "coredns-66bc5c9577-kbgn5" [1991cfb9-3c4d-4617-bbae-e9323fb13c40] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 18:58:33.048027  268639 system_pods.go:89] "csi-hostpath-attacher-0" [51a4bf95-11c1-4dfb-a24d-3b612848e249] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 18:58:33.048036  268639 system_pods.go:89] "csi-hostpath-resizer-0" [030a7823-ab0b-438e-a310-b5500db7434c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1027 18:58:33.048069  268639 system_pods.go:89] "csi-hostpathplugin-42bzh" [dfa2ad2c-a861-4699-851d-80b10c2a11b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 18:58:33.048083  268639 system_pods.go:89] "etcd-addons-101592" [22ba28c5-70eb-44bb-bc1c-99fdfacd3b07] Running
	I1027 18:58:33.048091  268639 system_pods.go:89] "kindnet-87t7g" [2787c598-3064-4a7c-89a0-28d581264bc7] Running
	I1027 18:58:33.048096  268639 system_pods.go:89] "kube-apiserver-addons-101592" [24a55c4e-9f3b-416d-bfed-a9a74b7b1210] Running
	I1027 18:58:33.048107  268639 system_pods.go:89] "kube-controller-manager-addons-101592" [bd4d474e-49ea-4c7d-97a1-71dfcdae46c8] Running
	I1027 18:58:33.048114  268639 system_pods.go:89] "kube-ingress-dns-minikube" [5be647b4-394d-4537-a6b1-82da5122c552] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 18:58:33.048119  268639 system_pods.go:89] "kube-proxy-k9g92" [e9916ac3-9031-44fd-a0ed-d6e5647234b6] Running
	I1027 18:58:33.048143  268639 system_pods.go:89] "kube-scheduler-addons-101592" [cff17c74-74e7-40a5-9cf6-91c4fb0684fd] Running
	I1027 18:58:33.048161  268639 system_pods.go:89] "metrics-server-85b7d694d7-mmqw2" [38c42c1b-338f-4857-a519-d3a5ae92f76f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 18:58:33.048168  268639 system_pods.go:89] "nvidia-device-plugin-daemonset-sghjb" [4b33da98-82a0-4a8e-aa0c-c6cf4878e558] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 18:58:33.048181  268639 system_pods.go:89] "registry-6b586f9694-jvgtv" [590c8e4c-843c-4795-a036-d9969aaa1662] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 18:58:33.048189  268639 system_pods.go:89] "registry-creds-764b6fb674-fz96k" [aca5b0e8-8150-49b6-a3e8-536f93b6d0fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 18:58:33.048197  268639 system_pods.go:89] "registry-proxy-k87sb" [459056d8-2856-482a-8f3a-86430bc52e0e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1027 18:58:33.048203  268639 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pqsgt" [c0a49fc2-a5cb-4b90-8e22-c6bfba73b976] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:58:33.048227  268639 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pvz49" [abc88a48-3d78-49ca-9bbc-a47d33c52228] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:58:33.048241  268639 system_pods.go:89] "storage-provisioner" [dcd0674c-1f9d-42a7-acf7-06c97b686cf1] Running
	I1027 18:58:33.048268  268639 retry.go:31] will retry after 469.239154ms: missing components: kube-dns
	I1027 18:58:33.123157  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:33.123329  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:33.238704  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:33.503304  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:33.522377  268639 system_pods.go:86] 19 kube-system pods found
	I1027 18:58:33.522413  268639 system_pods.go:89] "coredns-66bc5c9577-kbgn5" [1991cfb9-3c4d-4617-bbae-e9323fb13c40] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 18:58:33.522424  268639 system_pods.go:89] "csi-hostpath-attacher-0" [51a4bf95-11c1-4dfb-a24d-3b612848e249] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 18:58:33.522431  268639 system_pods.go:89] "csi-hostpath-resizer-0" [030a7823-ab0b-438e-a310-b5500db7434c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1027 18:58:33.522437  268639 system_pods.go:89] "csi-hostpathplugin-42bzh" [dfa2ad2c-a861-4699-851d-80b10c2a11b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 18:58:33.522444  268639 system_pods.go:89] "etcd-addons-101592" [22ba28c5-70eb-44bb-bc1c-99fdfacd3b07] Running
	I1027 18:58:33.522450  268639 system_pods.go:89] "kindnet-87t7g" [2787c598-3064-4a7c-89a0-28d581264bc7] Running
	I1027 18:58:33.522454  268639 system_pods.go:89] "kube-apiserver-addons-101592" [24a55c4e-9f3b-416d-bfed-a9a74b7b1210] Running
	I1027 18:58:33.522459  268639 system_pods.go:89] "kube-controller-manager-addons-101592" [bd4d474e-49ea-4c7d-97a1-71dfcdae46c8] Running
	I1027 18:58:33.522465  268639 system_pods.go:89] "kube-ingress-dns-minikube" [5be647b4-394d-4537-a6b1-82da5122c552] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 18:58:33.522478  268639 system_pods.go:89] "kube-proxy-k9g92" [e9916ac3-9031-44fd-a0ed-d6e5647234b6] Running
	I1027 18:58:33.522482  268639 system_pods.go:89] "kube-scheduler-addons-101592" [cff17c74-74e7-40a5-9cf6-91c4fb0684fd] Running
	I1027 18:58:33.522489  268639 system_pods.go:89] "metrics-server-85b7d694d7-mmqw2" [38c42c1b-338f-4857-a519-d3a5ae92f76f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 18:58:33.522504  268639 system_pods.go:89] "nvidia-device-plugin-daemonset-sghjb" [4b33da98-82a0-4a8e-aa0c-c6cf4878e558] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 18:58:33.522510  268639 system_pods.go:89] "registry-6b586f9694-jvgtv" [590c8e4c-843c-4795-a036-d9969aaa1662] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 18:58:33.522516  268639 system_pods.go:89] "registry-creds-764b6fb674-fz96k" [aca5b0e8-8150-49b6-a3e8-536f93b6d0fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 18:58:33.522528  268639 system_pods.go:89] "registry-proxy-k87sb" [459056d8-2856-482a-8f3a-86430bc52e0e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1027 18:58:33.522534  268639 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pqsgt" [c0a49fc2-a5cb-4b90-8e22-c6bfba73b976] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:58:33.522541  268639 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pvz49" [abc88a48-3d78-49ca-9bbc-a47d33c52228] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:58:33.522548  268639 system_pods.go:89] "storage-provisioner" [dcd0674c-1f9d-42a7-acf7-06c97b686cf1] Running
	I1027 18:58:33.522564  268639 retry.go:31] will retry after 449.494258ms: missing components: kube-dns
	I1027 18:58:33.621134  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:33.623101  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:33.737453  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:33.977517  268639 system_pods.go:86] 19 kube-system pods found
	I1027 18:58:33.977554  268639 system_pods.go:89] "coredns-66bc5c9577-kbgn5" [1991cfb9-3c4d-4617-bbae-e9323fb13c40] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 18:58:33.977563  268639 system_pods.go:89] "csi-hostpath-attacher-0" [51a4bf95-11c1-4dfb-a24d-3b612848e249] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 18:58:33.977570  268639 system_pods.go:89] "csi-hostpath-resizer-0" [030a7823-ab0b-438e-a310-b5500db7434c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1027 18:58:33.977576  268639 system_pods.go:89] "csi-hostpathplugin-42bzh" [dfa2ad2c-a861-4699-851d-80b10c2a11b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 18:58:33.977581  268639 system_pods.go:89] "etcd-addons-101592" [22ba28c5-70eb-44bb-bc1c-99fdfacd3b07] Running
	I1027 18:58:33.977586  268639 system_pods.go:89] "kindnet-87t7g" [2787c598-3064-4a7c-89a0-28d581264bc7] Running
	I1027 18:58:33.977590  268639 system_pods.go:89] "kube-apiserver-addons-101592" [24a55c4e-9f3b-416d-bfed-a9a74b7b1210] Running
	I1027 18:58:33.977595  268639 system_pods.go:89] "kube-controller-manager-addons-101592" [bd4d474e-49ea-4c7d-97a1-71dfcdae46c8] Running
	I1027 18:58:33.977607  268639 system_pods.go:89] "kube-ingress-dns-minikube" [5be647b4-394d-4537-a6b1-82da5122c552] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 18:58:33.977616  268639 system_pods.go:89] "kube-proxy-k9g92" [e9916ac3-9031-44fd-a0ed-d6e5647234b6] Running
	I1027 18:58:33.977621  268639 system_pods.go:89] "kube-scheduler-addons-101592" [cff17c74-74e7-40a5-9cf6-91c4fb0684fd] Running
	I1027 18:58:33.977629  268639 system_pods.go:89] "metrics-server-85b7d694d7-mmqw2" [38c42c1b-338f-4857-a519-d3a5ae92f76f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 18:58:33.977646  268639 system_pods.go:89] "nvidia-device-plugin-daemonset-sghjb" [4b33da98-82a0-4a8e-aa0c-c6cf4878e558] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 18:58:33.977660  268639 system_pods.go:89] "registry-6b586f9694-jvgtv" [590c8e4c-843c-4795-a036-d9969aaa1662] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 18:58:33.977666  268639 system_pods.go:89] "registry-creds-764b6fb674-fz96k" [aca5b0e8-8150-49b6-a3e8-536f93b6d0fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 18:58:33.977673  268639 system_pods.go:89] "registry-proxy-k87sb" [459056d8-2856-482a-8f3a-86430bc52e0e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1027 18:58:33.977682  268639 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pqsgt" [c0a49fc2-a5cb-4b90-8e22-c6bfba73b976] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:58:33.977688  268639 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pvz49" [abc88a48-3d78-49ca-9bbc-a47d33c52228] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:58:33.977696  268639 system_pods.go:89] "storage-provisioner" [dcd0674c-1f9d-42a7-acf7-06c97b686cf1] Running
	I1027 18:58:33.977710  268639 retry.go:31] will retry after 516.588235ms: missing components: kube-dns
	I1027 18:58:34.002832  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:34.123184  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:34.123529  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:34.238121  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:34.500420  268639 system_pods.go:86] 19 kube-system pods found
	I1027 18:58:34.500514  268639 system_pods.go:89] "coredns-66bc5c9577-kbgn5" [1991cfb9-3c4d-4617-bbae-e9323fb13c40] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 18:58:34.500540  268639 system_pods.go:89] "csi-hostpath-attacher-0" [51a4bf95-11c1-4dfb-a24d-3b612848e249] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 18:58:34.500591  268639 system_pods.go:89] "csi-hostpath-resizer-0" [030a7823-ab0b-438e-a310-b5500db7434c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1027 18:58:34.500627  268639 system_pods.go:89] "csi-hostpathplugin-42bzh" [dfa2ad2c-a861-4699-851d-80b10c2a11b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 18:58:34.500665  268639 system_pods.go:89] "etcd-addons-101592" [22ba28c5-70eb-44bb-bc1c-99fdfacd3b07] Running
	I1027 18:58:34.500696  268639 system_pods.go:89] "kindnet-87t7g" [2787c598-3064-4a7c-89a0-28d581264bc7] Running
	I1027 18:58:34.500719  268639 system_pods.go:89] "kube-apiserver-addons-101592" [24a55c4e-9f3b-416d-bfed-a9a74b7b1210] Running
	I1027 18:58:34.500757  268639 system_pods.go:89] "kube-controller-manager-addons-101592" [bd4d474e-49ea-4c7d-97a1-71dfcdae46c8] Running
	I1027 18:58:34.500785  268639 system_pods.go:89] "kube-ingress-dns-minikube" [5be647b4-394d-4537-a6b1-82da5122c552] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 18:58:34.500806  268639 system_pods.go:89] "kube-proxy-k9g92" [e9916ac3-9031-44fd-a0ed-d6e5647234b6] Running
	I1027 18:58:34.500846  268639 system_pods.go:89] "kube-scheduler-addons-101592" [cff17c74-74e7-40a5-9cf6-91c4fb0684fd] Running
	I1027 18:58:34.500873  268639 system_pods.go:89] "metrics-server-85b7d694d7-mmqw2" [38c42c1b-338f-4857-a519-d3a5ae92f76f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 18:58:34.500903  268639 system_pods.go:89] "nvidia-device-plugin-daemonset-sghjb" [4b33da98-82a0-4a8e-aa0c-c6cf4878e558] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 18:58:34.500936  268639 system_pods.go:89] "registry-6b586f9694-jvgtv" [590c8e4c-843c-4795-a036-d9969aaa1662] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 18:58:34.500964  268639 system_pods.go:89] "registry-creds-764b6fb674-fz96k" [aca5b0e8-8150-49b6-a3e8-536f93b6d0fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 18:58:34.500986  268639 system_pods.go:89] "registry-proxy-k87sb" [459056d8-2856-482a-8f3a-86430bc52e0e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1027 18:58:34.501024  268639 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pqsgt" [c0a49fc2-a5cb-4b90-8e22-c6bfba73b976] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:58:34.501051  268639 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pvz49" [abc88a48-3d78-49ca-9bbc-a47d33c52228] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:58:34.501071  268639 system_pods.go:89] "storage-provisioner" [dcd0674c-1f9d-42a7-acf7-06c97b686cf1] Running
	I1027 18:58:34.501119  268639 retry.go:31] will retry after 573.759707ms: missing components: kube-dns
	I1027 18:58:34.505628  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:34.625365  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:34.647287  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:34.740252  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:35.006570  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:35.084254  268639 system_pods.go:86] 19 kube-system pods found
	I1027 18:58:35.084345  268639 system_pods.go:89] "coredns-66bc5c9577-kbgn5" [1991cfb9-3c4d-4617-bbae-e9323fb13c40] Running
	I1027 18:58:35.084379  268639 system_pods.go:89] "csi-hostpath-attacher-0" [51a4bf95-11c1-4dfb-a24d-3b612848e249] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 18:58:35.084433  268639 system_pods.go:89] "csi-hostpath-resizer-0" [030a7823-ab0b-438e-a310-b5500db7434c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1027 18:58:35.084473  268639 system_pods.go:89] "csi-hostpathplugin-42bzh" [dfa2ad2c-a861-4699-851d-80b10c2a11b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 18:58:35.084494  268639 system_pods.go:89] "etcd-addons-101592" [22ba28c5-70eb-44bb-bc1c-99fdfacd3b07] Running
	I1027 18:58:35.084516  268639 system_pods.go:89] "kindnet-87t7g" [2787c598-3064-4a7c-89a0-28d581264bc7] Running
	I1027 18:58:35.084548  268639 system_pods.go:89] "kube-apiserver-addons-101592" [24a55c4e-9f3b-416d-bfed-a9a74b7b1210] Running
	I1027 18:58:35.084572  268639 system_pods.go:89] "kube-controller-manager-addons-101592" [bd4d474e-49ea-4c7d-97a1-71dfcdae46c8] Running
	I1027 18:58:35.084605  268639 system_pods.go:89] "kube-ingress-dns-minikube" [5be647b4-394d-4537-a6b1-82da5122c552] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 18:58:35.084632  268639 system_pods.go:89] "kube-proxy-k9g92" [e9916ac3-9031-44fd-a0ed-d6e5647234b6] Running
	I1027 18:58:35.084661  268639 system_pods.go:89] "kube-scheduler-addons-101592" [cff17c74-74e7-40a5-9cf6-91c4fb0684fd] Running
	I1027 18:58:35.084689  268639 system_pods.go:89] "metrics-server-85b7d694d7-mmqw2" [38c42c1b-338f-4857-a519-d3a5ae92f76f] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 18:58:35.084714  268639 system_pods.go:89] "nvidia-device-plugin-daemonset-sghjb" [4b33da98-82a0-4a8e-aa0c-c6cf4878e558] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 18:58:35.084738  268639 system_pods.go:89] "registry-6b586f9694-jvgtv" [590c8e4c-843c-4795-a036-d9969aaa1662] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 18:58:35.084773  268639 system_pods.go:89] "registry-creds-764b6fb674-fz96k" [aca5b0e8-8150-49b6-a3e8-536f93b6d0fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 18:58:35.084799  268639 system_pods.go:89] "registry-proxy-k87sb" [459056d8-2856-482a-8f3a-86430bc52e0e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1027 18:58:35.084823  268639 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pqsgt" [c0a49fc2-a5cb-4b90-8e22-c6bfba73b976] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:58:35.084880  268639 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pvz49" [abc88a48-3d78-49ca-9bbc-a47d33c52228] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:58:35.084909  268639 system_pods.go:89] "storage-provisioner" [dcd0674c-1f9d-42a7-acf7-06c97b686cf1] Running
	I1027 18:58:35.084938  268639 system_pods.go:126] duration metric: took 2.704131163s to wait for k8s-apps to be running ...
	I1027 18:58:35.084961  268639 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 18:58:35.085073  268639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 18:58:35.104105  268639 system_svc.go:56] duration metric: took 19.135873ms WaitForService to wait for kubelet
	I1027 18:58:35.104188  268639 kubeadm.go:586] duration metric: took 45.252422244s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
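The kubelet check above shells out to systemd: `systemctl is-active --quiet <unit>` prints nothing and reports the unit's state purely through its exit code (0 = active). The same probe from Go, as a small sketch:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Exit code 0 means the unit is active; --quiet suppresses the
		// state string that `systemctl is-active` would otherwise print.
		if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
			fmt.Println("kubelet is not active:", err)
			return
		}
		fmt.Println("kubelet is active")
	}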
	I1027 18:58:35.104242  268639 node_conditions.go:102] verifying NodePressure condition ...
	I1027 18:58:35.108316  268639 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 18:58:35.108396  268639 node_conditions.go:123] node cpu capacity is 2
	I1027 18:58:35.108424  268639 node_conditions.go:105] duration metric: took 4.16068ms to run NodePressure ...
	I1027 18:58:35.108450  268639 start.go:241] waiting for startup goroutines ...
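The NodePressure verification reads the node object itself: the capacity figures above (203034800Ki of ephemeral storage, 2 CPUs) come from status.capacity, and the pressure check requires that none of the node's memory, disk, or PID pressure conditions is True. A client-go sketch of the same inspection, reusing the hypothetical kubeconfig from earlier:

	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, n := range nodes.Items {
			// Capacity values like the "203034800Ki" and "2" logged above.
			fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name,
				n.Status.Capacity.StorageEphemeral(), n.Status.Capacity.Cpu())
			// A node is under pressure when any of these conditions is True.
			for _, c := range n.Status.Conditions {
				switch c.Type {
				case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
					if c.Status != corev1.ConditionFalse {
						fmt.Printf("  pressure: %s=%s\n", c.Type, c.Status)
					}
				}
			}
		}
	}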
	I1027 18:58:35.121110  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:35.129418  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:35.237515  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:35.503236  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:35.622730  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:35.623002  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:35.739993  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:36.006327  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:36.123166  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:36.125423  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:36.244853  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:36.503443  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:36.622746  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:36.622806  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:36.737284  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:37.003859  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:37.122776  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:37.123983  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:37.231121  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:58:37.241340  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:37.503270  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:37.621177  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:37.626124  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:37.737635  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:38.003855  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:38.123088  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:38.123511  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:38.236766  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:38.317937  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.086776419s)
	W1027 18:58:38.317975  268639 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:38.317993  268639 retry.go:31] will retry after 9.298329959s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
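The failure above is kubectl's client-side validation: every document in a manifest must set apiVersion and kind, and at least one document in ig-crd.yaml does not, so kubectl applies the valid documents (the "unchanged"/"configured" stdout) and exits 1 on the invalid one. A small lint pass that reproduces the check, assuming a local copy of the failing manifest; the split on "---" is deliberately naive:

	package main

	import (
		"fmt"
		"os"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		sigyaml "sigs.k8s.io/yaml"
	)

	func main() {
		data, err := os.ReadFile("ig-crd.yaml") // hypothetical local copy of the manifest
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for i, doc := range strings.Split(string(data), "\n---") {
			if strings.TrimSpace(doc) == "" {
				continue
			}
			var tm metav1.TypeMeta
			if err := sigyaml.Unmarshal([]byte(doc), &tm); err != nil {
				fmt.Printf("document %d: unparseable: %v\n", i, err)
				continue
			}
			if tm.APIVersion == "" || tm.Kind == "" {
				// This is the condition kubectl's validation reported above.
				fmt.Printf("document %d: apiVersion or kind not set\n", i)
			}
		}
	}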
	I1027 18:58:38.503167  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:38.620406  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:38.622229  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:38.737515  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:39.002648  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:39.121039  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:39.123512  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:39.238024  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:39.503347  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:39.620720  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:39.622522  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:39.736705  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:40.003612  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:40.125022  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:40.125385  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:40.238094  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:40.503299  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:40.621016  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:40.622634  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:40.736735  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:41.003277  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:41.123604  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:41.123818  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:41.236827  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:41.503630  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:41.622205  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:41.623922  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:41.737204  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:42.003014  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:42.130456  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:42.132282  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:42.242949  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:42.503682  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:42.620634  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:42.623627  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:42.736899  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:43.004229  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:43.121956  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:43.123075  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:43.237353  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:43.502499  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:43.620910  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:43.622975  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:43.737022  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:44.003612  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:44.121432  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:44.124104  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:44.237218  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:44.503319  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:44.621241  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:44.624174  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:44.737685  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:45.003950  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:45.126736  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:45.127218  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:45.243957  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:45.504907  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:45.622408  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:45.624111  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:45.737371  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:46.002917  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:46.121886  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:46.124465  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:46.261991  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:46.503723  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:46.622486  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:46.624234  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:46.737283  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:47.003384  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:47.124006  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:47.124140  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:47.237561  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:47.504007  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:47.617377  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:58:47.629430  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:47.629561  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:47.743057  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:48.003548  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:48.124261  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:48.124962  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:48.237208  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:48.506034  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:48.624833  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:48.625573  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:48.738521  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:49.012149  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:49.014283  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.396858118s)
	W1027 18:58:49.014325  268639 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:49.014356  268639 retry.go:31] will retry after 21.94994504s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
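Note how the retry waits for this apply grow across attempts (9.3s above, 21.9s here, 25.3s below), consistent with exponential backoff plus jitter, which is also why the intervals are uneven rather than exact multiples. A sketch of one common way to produce such a schedule with apimachinery's wait package; the parameters are illustrative, not minikube's actual ones:

	package main

	import (
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	func main() {
		attempt := 0
		// Doubling backoff with 10% jitter: each wait is roughly twice the
		// previous one, randomized so concurrent retries do not align.
		backoff := wait.Backoff{
			Duration: 500 * time.Millisecond,
			Factor:   2.0,
			Jitter:   0.1,
			Steps:    6,
		}
		err := wait.ExponentialBackoff(backoff, func() (bool, error) {
			attempt++
			fmt.Println("attempt", attempt)
			return attempt >= 4, nil // pretend the 4th attempt succeeds
		})
		if err != nil {
			fmt.Println("gave up:", err)
		}
	}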
	I1027 18:58:49.122338  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:49.124691  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:49.236766  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:49.503178  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:49.621699  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:49.624252  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:49.738065  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:50.004211  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:50.123790  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:50.124077  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:50.237194  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:50.503732  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:50.621858  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:50.624515  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:50.737766  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:51.003575  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:51.121277  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:51.123710  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:51.237365  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:51.503392  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:51.621652  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:51.623833  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:51.736972  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:52.003322  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:52.123170  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:52.123552  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:52.240232  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:52.503879  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:52.621215  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:52.624432  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:52.738161  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:53.003756  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:53.121521  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:53.123016  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:53.237018  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:53.503796  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:53.622174  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:53.623213  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:53.737557  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:54.002551  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:54.120773  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:54.122647  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:54.236563  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:54.503417  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:54.623848  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:54.624134  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:54.737363  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:55.006046  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:55.123764  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:55.125922  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:55.237293  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:55.503530  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:55.621094  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:55.622665  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:55.736330  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:56.002512  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:56.121453  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:56.122901  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:56.241371  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:56.503372  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:56.620387  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:56.622526  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:56.737313  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:57.002956  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:57.121111  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:57.123387  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:57.237443  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:57.503516  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:57.620770  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:57.623222  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:57.737281  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:58.002649  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:58.121233  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:58.123226  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:58.237403  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:58.503576  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:58.622889  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:58.623084  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:58.737449  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:59.005613  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:59.123113  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:59.123205  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:59.238384  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:59.502631  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:59.620316  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:59.622490  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:59.737346  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:00.003517  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:00.126218  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:00.143522  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:00.239448  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:00.504517  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:00.626883  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:00.627080  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:00.737151  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:01.018332  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:01.121933  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:01.123075  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:01.238285  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:01.502439  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:01.621585  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:01.622323  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:01.737577  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:02.009982  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:02.123292  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:02.124010  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:02.237673  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:02.503413  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:02.620551  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:02.622656  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:02.736827  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:03.010105  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:03.121416  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:03.121774  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:03.236874  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:03.504683  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:03.621404  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:03.623119  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:03.737320  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:04.003685  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:04.126370  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:04.126604  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:04.238384  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:04.503155  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:04.620629  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:04.623026  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:04.737190  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:05.003651  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:05.132803  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:05.133503  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:05.238182  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:05.502334  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:05.620618  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:05.622523  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:05.736604  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:06.003403  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:06.120645  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:06.124115  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:06.240709  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:06.504024  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:06.623214  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:06.623529  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:06.737625  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:07.003602  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:07.121158  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:07.123763  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:07.236974  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:07.503457  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:07.621848  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:07.623617  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:07.736759  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:08.003838  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:08.121431  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:08.123991  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:08.237069  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:08.503649  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:08.621764  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:08.623456  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:08.737621  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:09.002976  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:09.122662  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:09.123816  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:09.236926  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:09.503348  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:09.621489  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:09.622876  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:09.736854  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:10.004153  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:10.123825  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:10.124226  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:10.237463  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:10.503282  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:10.621629  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:10.631081  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:10.737496  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:10.964755  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:59:11.009371  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:11.124476  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:11.124730  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:11.237008  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:11.503550  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:11.623382  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:11.623775  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:11.737117  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:12.003270  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:12.121786  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:12.124663  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:12.161451  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.196660594s)
	W1027 18:59:12.161532  268639 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:59:12.161569  268639 retry.go:31] will retry after 25.286914289s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
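After this third identical failure the pattern is clear: the two gadget manifests are applied together, the objects that validate (presumably those from ig-deployment.yaml) go through every time, and the same document in ig-crd.yaml keeps failing, so no amount of retrying can succeed until the manifest itself changes. The error text offers --validate=false as an escape hatch, but the durable fix is adding the missing apiVersion and kind to the offending document.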
	I1027 18:59:12.237638  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:12.505610  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:12.621047  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:12.623573  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:12.737789  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:13.009153  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:13.122470  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:13.124083  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:13.237532  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:13.502802  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:13.622917  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:13.625258  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:13.736990  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:14.003554  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:14.121116  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:14.123360  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:14.237328  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:14.502614  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:14.621540  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:14.623963  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:14.736967  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:15.004451  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:15.121731  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:15.123677  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:15.236991  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:15.504865  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:15.620900  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:15.622697  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:15.737035  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:16.003781  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:16.121554  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:16.124277  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:16.243612  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:16.504072  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:16.623667  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:16.623892  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:16.737769  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:17.003667  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:17.121107  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:17.122916  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:17.255640  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:17.504064  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:17.622939  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:17.624170  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:17.737300  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:18.002818  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:18.120967  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:18.123406  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:18.247849  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:18.503272  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:18.620545  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:18.622217  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:18.737137  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:19.002373  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:19.121395  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:19.122641  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:19.241526  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:19.502663  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:19.621471  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:19.622373  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:19.737915  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:20.003606  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:20.123253  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:20.124110  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:20.237779  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:20.504332  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:20.623494  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:20.623974  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:20.737036  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:21.003683  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:21.128329  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:21.129155  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:21.236824  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:21.503641  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:21.620665  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:21.623299  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:21.737168  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:22.004411  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:22.121005  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:22.123908  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:22.237477  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:22.503807  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:22.621459  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:22.623935  268639 kapi.go:107] duration metric: took 1m26.50452032s to wait for kubernetes.io/minikube-addons=registry ...
	I1027 18:59:22.737154  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:23.003005  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:23.121733  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:23.236944  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:23.504135  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:23.622177  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:23.737221  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:24.002851  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:24.121908  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:24.237505  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:24.503568  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:24.621187  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:24.736997  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:25.004293  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:25.121918  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:25.237118  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:25.502927  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:25.622070  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:25.737210  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:26.002327  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:26.120704  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:26.239004  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:26.503611  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:26.624165  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:26.737310  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:27.085742  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:27.128727  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:27.238432  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:27.502237  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:27.621166  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:27.737851  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:28.003387  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:28.120812  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:28.236605  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:28.503442  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:28.621334  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:28.737447  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:29.004017  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:29.121128  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:29.237136  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:29.510363  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:29.624337  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:29.737349  268639 kapi.go:107] duration metric: took 1m30.003707646s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1027 18:59:29.740733  268639 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-101592 cluster.
	I1027 18:59:29.743684  268639 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1027 18:59:29.746612  268639 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
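
The opt-out label mentioned above lives in pod metadata; a minimal sketch, where only the gcp-auth-skip-secret key comes from the message and everything else is illustrative:

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-creds-example            # hypothetical pod name
      labels:
        gcp-auth-skip-secret: "true"    # opts this pod out of credential mounting
    spec:
      containers:
      - name: app                       # hypothetical container
        image: gcr.io/k8s-minikube/busybox
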
	I1027 18:59:30.002824  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:30.121346  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:30.509877  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:30.621694  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:31.003698  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:31.121365  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:31.504136  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:31.621211  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:32.003679  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:32.120764  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:32.503268  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:32.622445  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:33.003347  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:33.121470  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:33.504211  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:33.620523  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:34.004366  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:34.121315  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:34.503605  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:34.620538  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:35.002359  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:35.120651  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:35.504966  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:35.626680  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:36.003745  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:36.121388  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:36.504698  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:36.621016  268639 kapi.go:107] duration metric: took 1m40.503694022s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1027 18:59:37.099882  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:37.449185  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:59:37.503345  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:38.002968  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:38.503799  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:38.510392  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.061167706s)
	W1027 18:59:38.510451  268639 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1027 18:59:38.510529  268639 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
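
The stderr itself points at disabling client-side validation; a debugging sketch that reruns the exact failing command with the suggested flag (paths and binary version copied from the log; whether the addon then comes up is not established here):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/ig-crd.yaml \
      -f /etc/kubernetes/addons/ig-deployment.yaml

Skipping validation only silences the client check; fixing the empty document in ig-crd.yaml is the real repair.
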
	I1027 18:59:39.003834  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:39.503594  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:40.005297  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:40.506114  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:41.003798  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:41.504217  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:42.002961  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:42.502779  268639 kapi.go:107] duration metric: took 1m46.003408719s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1027 18:59:42.505928  268639 out.go:179] * Enabled addons: registry-creds, amd-gpu-device-plugin, ingress-dns, cloud-spanner, nvidia-device-plugin, storage-provisioner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1027 18:59:42.508700  268639 addons.go:514] duration metric: took 1m52.656495526s for enable addons: enabled=[registry-creds amd-gpu-device-plugin ingress-dns cloud-spanner nvidia-device-plugin storage-provisioner metrics-server yakd default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1027 18:59:42.508755  268639 start.go:246] waiting for cluster config update ...
	I1027 18:59:42.508775  268639 start.go:255] writing updated cluster config ...
	I1027 18:59:42.509073  268639 ssh_runner.go:195] Run: rm -f paused
	I1027 18:59:42.512552  268639 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 18:59:42.516796  268639 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kbgn5" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:59:42.524196  268639 pod_ready.go:94] pod "coredns-66bc5c9577-kbgn5" is "Ready"
	I1027 18:59:42.524226  268639 pod_ready.go:86] duration metric: took 7.398357ms for pod "coredns-66bc5c9577-kbgn5" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:59:42.526848  268639 pod_ready.go:83] waiting for pod "etcd-addons-101592" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:59:42.532751  268639 pod_ready.go:94] pod "etcd-addons-101592" is "Ready"
	I1027 18:59:42.532779  268639 pod_ready.go:86] duration metric: took 5.906067ms for pod "etcd-addons-101592" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:59:42.535544  268639 pod_ready.go:83] waiting for pod "kube-apiserver-addons-101592" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:59:42.539992  268639 pod_ready.go:94] pod "kube-apiserver-addons-101592" is "Ready"
	I1027 18:59:42.540067  268639 pod_ready.go:86] duration metric: took 4.493744ms for pod "kube-apiserver-addons-101592" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:59:42.542553  268639 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-101592" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:59:42.917668  268639 pod_ready.go:94] pod "kube-controller-manager-addons-101592" is "Ready"
	I1027 18:59:42.917699  268639 pod_ready.go:86] duration metric: took 375.12387ms for pod "kube-controller-manager-addons-101592" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:59:43.117876  268639 pod_ready.go:83] waiting for pod "kube-proxy-k9g92" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:59:43.517231  268639 pod_ready.go:94] pod "kube-proxy-k9g92" is "Ready"
	I1027 18:59:43.517263  268639 pod_ready.go:86] duration metric: took 399.358384ms for pod "kube-proxy-k9g92" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:59:43.717667  268639 pod_ready.go:83] waiting for pod "kube-scheduler-addons-101592" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:59:44.117783  268639 pod_ready.go:94] pod "kube-scheduler-addons-101592" is "Ready"
	I1027 18:59:44.117812  268639 pod_ready.go:86] duration metric: took 400.115962ms for pod "kube-scheduler-addons-101592" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:59:44.117826  268639 pod_ready.go:40] duration metric: took 1.605245188s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 18:59:44.168647  268639 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 18:59:44.171880  268639 out.go:179] * Done! kubectl is now configured to use "addons-101592" cluster and "default" namespace by default
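
The pod_ready loop above, which allows up to 4m0s for pods carrying one of the listed labels, has a close kubectl equivalent; a sketch reproducing one of those checks by hand (context name taken from the log):

    kubectl --context addons-101592 wait pod \
      --namespace kube-system \
      --selector k8s-app=kube-dns \
      --for=condition=Ready \
      --timeout=4m0s
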
	
	
	==> CRI-O <==
	Oct 27 19:02:41 addons-101592 crio[831]: time="2025-10-27T19:02:41.887262842Z" level=info msg="Running pod sandbox: kube-system/registry-creds-764b6fb674-fz96k/POD" id=50b89632-c295-405c-b5b0-614814e5231c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 19:02:41 addons-101592 crio[831]: time="2025-10-27T19:02:41.887360725Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:02:41 addons-101592 crio[831]: time="2025-10-27T19:02:41.905306596Z" level=info msg="Got pod network &{Name:registry-creds-764b6fb674-fz96k Namespace:kube-system ID:cd8189a7bc95418b6b6b18946a76b003647c06ab1e233077a800061341b6ca26 UID:aca5b0e8-8150-49b6-a3e8-536f93b6d0fe NetNS:/var/run/netns/59501542-5bdd-49e8-8821-5ba419f50381 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078b08}] Aliases:map[]}"
	Oct 27 19:02:41 addons-101592 crio[831]: time="2025-10-27T19:02:41.905352953Z" level=info msg="Adding pod kube-system_registry-creds-764b6fb674-fz96k to CNI network \"kindnet\" (type=ptp)"
	Oct 27 19:02:41 addons-101592 crio[831]: time="2025-10-27T19:02:41.934364364Z" level=info msg="Got pod network &{Name:registry-creds-764b6fb674-fz96k Namespace:kube-system ID:cd8189a7bc95418b6b6b18946a76b003647c06ab1e233077a800061341b6ca26 UID:aca5b0e8-8150-49b6-a3e8-536f93b6d0fe NetNS:/var/run/netns/59501542-5bdd-49e8-8821-5ba419f50381 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000078b08}] Aliases:map[]}"
	Oct 27 19:02:41 addons-101592 crio[831]: time="2025-10-27T19:02:41.934518638Z" level=info msg="Checking pod kube-system_registry-creds-764b6fb674-fz96k for CNI network kindnet (type=ptp)"
	Oct 27 19:02:41 addons-101592 crio[831]: time="2025-10-27T19:02:41.94236495Z" level=info msg="Ran pod sandbox cd8189a7bc95418b6b6b18946a76b003647c06ab1e233077a800061341b6ca26 with infra container: kube-system/registry-creds-764b6fb674-fz96k/POD" id=50b89632-c295-405c-b5b0-614814e5231c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 19:02:41 addons-101592 crio[831]: time="2025-10-27T19:02:41.944887659Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=7c85c823-984a-47d3-a637-21555d99ab7c name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:02:41 addons-101592 crio[831]: time="2025-10-27T19:02:41.945183876Z" level=info msg="Image docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 not found" id=7c85c823-984a-47d3-a637-21555d99ab7c name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:02:41 addons-101592 crio[831]: time="2025-10-27T19:02:41.94531301Z" level=info msg="Neither image nor artifact docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 found" id=7c85c823-984a-47d3-a637-21555d99ab7c name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:02:42 addons-101592 crio[831]: time="2025-10-27T19:02:42.452609372Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=03353d8e-fdcc-4a34-b735-d15fe6eb970c name=/runtime.v1.ImageService/PullImage
	Oct 27 19:02:42 addons-101592 crio[831]: time="2025-10-27T19:02:42.453637782Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=5d397502-c659-48c4-9708-e4b15d3054f2 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:02:42 addons-101592 crio[831]: time="2025-10-27T19:02:42.457884886Z" level=info msg="Pulling image: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=063dcc96-9fef-4633-92fe-6194094dcd27 name=/runtime.v1.ImageService/PullImage
	Oct 27 19:02:42 addons-101592 crio[831]: time="2025-10-27T19:02:42.462725408Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=4c36afce-f315-49a8-a419-d60cfde33e68 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:02:42 addons-101592 crio[831]: time="2025-10-27T19:02:42.465131093Z" level=info msg="Trying to access \"docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605\""
	Oct 27 19:02:42 addons-101592 crio[831]: time="2025-10-27T19:02:42.469424005Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-q864q/hello-world-app" id=3c289bb8-f801-492f-9bb3-3dade6bc7ec7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:02:42 addons-101592 crio[831]: time="2025-10-27T19:02:42.46968001Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:02:42 addons-101592 crio[831]: time="2025-10-27T19:02:42.483779687Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:02:42 addons-101592 crio[831]: time="2025-10-27T19:02:42.4842016Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9caf3529a7e9a813ddf70fb040c12447d17424dfaf69c0e831238bdb84bac99c/merged/etc/passwd: no such file or directory"
	Oct 27 19:02:42 addons-101592 crio[831]: time="2025-10-27T19:02:42.484305505Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9caf3529a7e9a813ddf70fb040c12447d17424dfaf69c0e831238bdb84bac99c/merged/etc/group: no such file or directory"
	Oct 27 19:02:42 addons-101592 crio[831]: time="2025-10-27T19:02:42.484664349Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:02:42 addons-101592 crio[831]: time="2025-10-27T19:02:42.517382106Z" level=info msg="Created container 40bf10c9ad955ef4fef8221da7d47d720b4208d9f75b9146c16bb37327ef31ce: default/hello-world-app-5d498dc89-q864q/hello-world-app" id=3c289bb8-f801-492f-9bb3-3dade6bc7ec7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:02:42 addons-101592 crio[831]: time="2025-10-27T19:02:42.522134533Z" level=info msg="Starting container: 40bf10c9ad955ef4fef8221da7d47d720b4208d9f75b9146c16bb37327ef31ce" id=3ecbbd47-09d9-49d0-9d49-7671c8ad87b7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:02:42 addons-101592 crio[831]: time="2025-10-27T19:02:42.526650557Z" level=info msg="Started container" PID=7150 containerID=40bf10c9ad955ef4fef8221da7d47d720b4208d9f75b9146c16bb37327ef31ce description=default/hello-world-app-5d498dc89-q864q/hello-world-app id=3ecbbd47-09d9-49d0-9d49-7671c8ad87b7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4f046711559854d21a5101092b7f19bd64ea40037bebc84be3ab74a5e73d0d3d
	Oct 27 19:02:42 addons-101592 crio[831]: time="2025-10-27T19:02:42.702447534Z" level=info msg="Image operating system mismatch: image uses OS \"linux\"+architecture \"amd64\"+\"\", expecting one of \"linux+arm64+\\\"v8\\\", linux+arm64+\\\"\\\"\""
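
The final CRI-O line records a platform mismatch: a linux/amd64 image was pulled on this linux/arm64 node. A sketch for confirming an image's platform from inside the node (image reference taken from the log; assumes the profile is still running):

    minikube -p addons-101592 ssh -- sudo crictl inspecti \
      docker.io/kicbase/echo-server:1.0
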
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	40bf10c9ad955       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        1 second ago        Running             hello-world-app                          0                   4f04671155985       hello-world-app-5d498dc89-q864q             default
	27c7b2ac4d6c6       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0                                              2 minutes ago       Running             nginx                                    0                   304d5487099dd       nginx                                       default
	0f6f995637394       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago       Running             busybox                                  0                   5db5ba7be1965       busybox                                     default
	3e163786b302c       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago       Running             csi-snapshotter                          0                   6cf931804122e       csi-hostpathplugin-42bzh                    kube-system
	838eb4978f205       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago       Running             csi-provisioner                          0                   6cf931804122e       csi-hostpathplugin-42bzh                    kube-system
	d6943420da6a4       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago       Running             liveness-probe                           0                   6cf931804122e       csi-hostpathplugin-42bzh                    kube-system
	7e2b5aeafd005       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago       Running             hostpath                                 0                   6cf931804122e       csi-hostpathplugin-42bzh                    kube-system
	2aa2da6d8f06b       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago       Running             node-driver-registrar                    0                   6cf931804122e       csi-hostpathplugin-42bzh                    kube-system
	0a9000dd55e0a       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             3 minutes ago       Running             controller                               0                   79cefa06360e1       ingress-nginx-controller-675c5ddd98-ql9nw   ingress-nginx
	7adff4fa7265c       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago       Running             gcp-auth                                 0                   8dbf1515b6d55       gcp-auth-78565c9fb4-vrdnc                   gcp-auth
	c0bc4ccc46eff       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            3 minutes ago       Running             gadget                                   0                   618425097f045       gadget-647wx                                gadget
	50e0b6b85b8a7       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago       Running             registry-proxy                           0                   4cdae724316c7       registry-proxy-k87sb                        kube-system
	cd4e827788ce9       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago       Running             csi-external-health-monitor-controller   0                   6cf931804122e       csi-hostpathplugin-42bzh                    kube-system
	7f055e0b328c7       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     3 minutes ago       Running             nvidia-device-plugin-ctr                 0                   934eac4cc468b       nvidia-device-plugin-daemonset-sghjb        kube-system
	898e5fee8fdcf       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             3 minutes ago       Exited              patch                                    2                   0a4e7fbf2d470       ingress-nginx-admission-patch-6wkkl         ingress-nginx
	7e472e3b179a0       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago       Running             csi-resizer                              0                   e505c927ba76d       csi-hostpath-resizer-0                      kube-system
	3043062bbd230       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   3 minutes ago       Exited              create                                   0                   00e18a96f48c5       ingress-nginx-admission-create-6hkms        ingress-nginx
	a9d1dbc41feea       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago       Running             registry                                 0                   b5e278c9e2a43       registry-6b586f9694-jvgtv                   kube-system
	bfa31a6efbb78       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago       Running             volume-snapshot-controller               0                   2528bbcc805a3       snapshot-controller-7d9fbc56b8-pqsgt        kube-system
	8c0b8c2d5a795       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago       Running             local-path-provisioner                   0                   7cf5a05b8948a       local-path-provisioner-648f6765c9-jcsl9     local-path-storage
	458122abc2a23       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago       Running             volume-snapshot-controller               0                   c9c757080912c       snapshot-controller-7d9fbc56b8-pvz49        kube-system
	2d95c80ef718c       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago       Running             csi-attacher                             0                   02e3dbfc1d399       csi-hostpath-attacher-0                     kube-system
	820e580b2ebe2       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               3 minutes ago       Running             cloud-spanner-emulator                   0                   46f7814a88ada       cloud-spanner-emulator-86bd5cbb97-zkplg     default
	a33d881a2daa0       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago       Running             minikube-ingress-dns                     0                   b0ebf26d8430e       kube-ingress-dns-minikube                   kube-system
	8753d9b0a7cb8       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              4 minutes ago       Running             yakd                                     0                   12c44a9aa9966       yakd-dashboard-5ff678cb9-lhv5m              yakd-dashboard
	8ebcbe2c9975f       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago       Running             metrics-server                           0                   b7aeb407c31fc       metrics-server-85b7d694d7-mmqw2             kube-system
	fe0ea9b1d2cf2       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago       Running             coredns                                  0                   848b0a7a17c8a       coredns-66bc5c9577-kbgn5                    kube-system
	f81e4711fbc01       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago       Running             storage-provisioner                      0                   9f36c68eeea05       storage-provisioner                         kube-system
	28587c37519da       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             4 minutes ago       Running             kube-proxy                               0                   d16d0b93a413f       kube-proxy-k9g92                            kube-system
	1e50342291985       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago       Running             kindnet-cni                              0                   c1995e14af0fe       kindnet-87t7g                               kube-system
	d5efbcd6024e7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago       Running             etcd                                     0                   ab64199fdc6fa       etcd-addons-101592                          kube-system
	4a399d9b30f5e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago       Running             kube-controller-manager                  0                   e76ab52a5758c       kube-controller-manager-addons-101592       kube-system
	752e65ab367c9       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago       Running             kube-scheduler                           0                   1b88e16ed88a7       kube-scheduler-addons-101592                kube-system
	02d35f7174bb3       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago       Running             kube-apiserver                           0                   5719b22fd6403       kube-apiserver-addons-101592                kube-system
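
The table above matches the shape of crictl ps -a output; a sketch to regenerate it against this profile (assumes the cluster is still running):

    minikube -p addons-101592 ssh -- sudo crictl ps -a
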
	
	
	==> coredns [fe0ea9b1d2cf2cbefbf8c8c1c5b40f8d072efd2d489b534d519c969d9f5078d6] <==
	[INFO] 10.244.0.16:48505 - 10738 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002079392s
	[INFO] 10.244.0.16:48505 - 50748 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000148534s
	[INFO] 10.244.0.16:48505 - 11375 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000146064s
	[INFO] 10.244.0.16:56548 - 9302 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000151258s
	[INFO] 10.244.0.16:56548 - 9097 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000069332s
	[INFO] 10.244.0.16:52312 - 29241 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000087235s
	[INFO] 10.244.0.16:52312 - 29069 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000066345s
	[INFO] 10.244.0.16:34698 - 7151 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00007601s
	[INFO] 10.244.0.16:34698 - 6707 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000079449s
	[INFO] 10.244.0.16:42050 - 39527 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001439383s
	[INFO] 10.244.0.16:42050 - 39346 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001415663s
	[INFO] 10.244.0.16:45965 - 16759 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000125823s
	[INFO] 10.244.0.16:45965 - 16595 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000130195s
	[INFO] 10.244.0.21:48819 - 47143 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000151865s
	[INFO] 10.244.0.21:44287 - 20735 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000140001s
	[INFO] 10.244.0.21:59920 - 29863 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00010682s
	[INFO] 10.244.0.21:51683 - 23029 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000074624s
	[INFO] 10.244.0.21:48016 - 44962 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000084347s
	[INFO] 10.244.0.21:52341 - 27557 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000095276s
	[INFO] 10.244.0.21:50552 - 29982 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001822464s
	[INFO] 10.244.0.21:55432 - 40925 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001681798s
	[INFO] 10.244.0.21:60096 - 34132 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.000559889s
	[INFO] 10.244.0.21:42923 - 23822 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001679706s
	[INFO] 10.244.0.23:40442 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00016512s
	[INFO] 10.244.0.23:44564 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000104856s
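
The NXDOMAIN/NOERROR pairs above are ordinary search-path expansion: with ndots:5, the resolver tries every search domain before the bare name resolves. A sketch of the resolv.conf a kube-system pod would need to produce exactly this query sequence (search domains reconstructed from the queries above; the nameserver address is the conventional kube-dns ClusterIP, assumed rather than shown in the log):

    search kube-system.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
    nameserver 10.96.0.10    # assumed kube-dns service IP, not shown in the log
    options ndots:5
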
	
	
	==> describe nodes <==
	Name:               addons-101592
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-101592
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=addons-101592
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T18_57_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-101592
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-101592"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 18:57:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-101592
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:02:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 19:02:41 +0000   Mon, 27 Oct 2025 18:57:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 19:02:41 +0000   Mon, 27 Oct 2025 18:57:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 19:02:41 +0000   Mon, 27 Oct 2025 18:57:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 19:02:41 +0000   Mon, 27 Oct 2025 18:58:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-101592
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                e04f1509-9fda-4a47-ab13-403e07d0fc28
	  Boot ID:                    23ea05b4-8203-4fb9-a84a-5deb4b091cbb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  default                     cloud-spanner-emulator-86bd5cbb97-zkplg      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  default                     hello-world-app-5d498dc89-q864q              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  gadget                      gadget-647wx                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  gcp-auth                    gcp-auth-78565c9fb4-vrdnc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-ql9nw    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m47s
	  kube-system                 coredns-66bc5c9577-kbgn5                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m53s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 csi-hostpathplugin-42bzh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 etcd-addons-101592                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m59s
	  kube-system                 kindnet-87t7g                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m54s
	  kube-system                 kube-apiserver-addons-101592                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-controller-manager-addons-101592        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-proxy-k9g92                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-scheduler-addons-101592                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 metrics-server-85b7d694d7-mmqw2              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m49s
	  kube-system                 nvidia-device-plugin-daemonset-sghjb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 registry-6b586f9694-jvgtv                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 registry-creds-764b6fb674-fz96k              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 registry-proxy-k87sb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 snapshot-controller-7d9fbc56b8-pqsgt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 snapshot-controller-7d9fbc56b8-pvz49         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  local-path-storage          local-path-provisioner-648f6765c9-jcsl9      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-lhv5m               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 4m52s                kube-proxy       
	  Warning  CgroupV1                 5m6s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m6s (x8 over 5m6s)  kubelet          Node addons-101592 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m6s (x8 over 5m6s)  kubelet          Node addons-101592 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m6s (x8 over 5m6s)  kubelet          Node addons-101592 status is now: NodeHasSufficientPID
	  Normal   Starting                 4m59s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m59s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m59s                kubelet          Node addons-101592 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m59s                kubelet          Node addons-101592 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m59s                kubelet          Node addons-101592 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m55s                node-controller  Node addons-101592 event: Registered Node addons-101592 in Controller
	  Normal   NodeReady                4m12s                kubelet          Node addons-101592 status is now: NodeReady
	
	
	==> dmesg <==
	[ +30.305925] overlayfs: idmapped layers are currently not supported
	[Oct27 18:28] overlayfs: idmapped layers are currently not supported
	[Oct27 18:29] overlayfs: idmapped layers are currently not supported
	[Oct27 18:30] overlayfs: idmapped layers are currently not supported
	[ +18.215952] overlayfs: idmapped layers are currently not supported
	[Oct27 18:31] overlayfs: idmapped layers are currently not supported
	[ +35.797174] overlayfs: idmapped layers are currently not supported
	[Oct27 18:32] overlayfs: idmapped layers are currently not supported
	[Oct27 18:34] overlayfs: idmapped layers are currently not supported
	[ +38.178588] overlayfs: idmapped layers are currently not supported
	[Oct27 18:36] overlayfs: idmapped layers are currently not supported
	[ +29.649930] overlayfs: idmapped layers are currently not supported
	[Oct27 18:37] overlayfs: idmapped layers are currently not supported
	[Oct27 18:38] overlayfs: idmapped layers are currently not supported
	[ +26.025304] overlayfs: idmapped layers are currently not supported
	[Oct27 18:39] overlayfs: idmapped layers are currently not supported
	[  +8.720024] overlayfs: idmapped layers are currently not supported
	[Oct27 18:40] overlayfs: idmapped layers are currently not supported
	[Oct27 18:41] overlayfs: idmapped layers are currently not supported
	[Oct27 18:42] overlayfs: idmapped layers are currently not supported
	[Oct27 18:43] overlayfs: idmapped layers are currently not supported
	[Oct27 18:44] overlayfs: idmapped layers are currently not supported
	[ +50.528384] overlayfs: idmapped layers are currently not supported
	[Oct27 18:56] kauditd_printk_skb: 8 callbacks suppressed
	[Oct27 18:57] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d5efbcd6024e7af2a933432a33e2b4197973d93eab2d9c966cb6e488d871f23b] <==
	{"level":"warn","ts":"2025-10-27T18:57:40.851857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:40.867178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:40.882401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:40.899148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:40.915129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:40.928875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:40.942170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:40.957470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:40.983943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:40.987297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:41.006233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:41.021069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:41.035166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:41.050057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:41.064763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:41.094021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:41.107709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:41.121567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:41.184265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:56.811289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:56.824913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:58:18.905554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:58:18.925334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:58:18.934222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:58:18.949147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49202","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [7adff4fa7265c0dee6e25f1e0e66d068a1947079c2f3e24230ee78e5700557ba] <==
	2025/10/27 18:59:28 GCP Auth Webhook started!
	2025/10/27 18:59:44 Ready to marshal response ...
	2025/10/27 18:59:44 Ready to write response ...
	2025/10/27 18:59:44 Ready to marshal response ...
	2025/10/27 18:59:44 Ready to write response ...
	2025/10/27 18:59:45 Ready to marshal response ...
	2025/10/27 18:59:45 Ready to write response ...
	2025/10/27 19:00:07 Ready to marshal response ...
	2025/10/27 19:00:07 Ready to write response ...
	2025/10/27 19:00:17 Ready to marshal response ...
	2025/10/27 19:00:17 Ready to write response ...
	2025/10/27 19:00:22 Ready to marshal response ...
	2025/10/27 19:00:22 Ready to write response ...
	2025/10/27 19:00:34 Ready to marshal response ...
	2025/10/27 19:00:34 Ready to write response ...
	2025/10/27 19:00:55 Ready to marshal response ...
	2025/10/27 19:00:55 Ready to write response ...
	2025/10/27 19:00:55 Ready to marshal response ...
	2025/10/27 19:00:55 Ready to write response ...
	2025/10/27 19:01:02 Ready to marshal response ...
	2025/10/27 19:01:02 Ready to write response ...
	2025/10/27 19:02:41 Ready to marshal response ...
	2025/10/27 19:02:41 Ready to write response ...
	
	
	==> kernel <==
	 19:02:43 up  1:45,  0 user,  load average: 1.03, 1.05, 1.69
	Linux addons-101592 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1e5034229198563b54bd8e5f173e0492eee9301465992e61e3c894cb36e53dd6] <==
	I1027 19:00:41.351655       1 main.go:301] handling current node
	I1027 19:00:51.348933       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:00:51.348982       1 main.go:301] handling current node
	I1027 19:01:01.348975       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:01:01.349023       1 main.go:301] handling current node
	I1027 19:01:11.351507       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:01:11.351630       1 main.go:301] handling current node
	I1027 19:01:21.354105       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:01:21.354148       1 main.go:301] handling current node
	I1027 19:01:31.351587       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:01:31.351640       1 main.go:301] handling current node
	I1027 19:01:41.350840       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:01:41.350874       1 main.go:301] handling current node
	I1027 19:01:51.349101       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:01:51.349147       1 main.go:301] handling current node
	I1027 19:02:01.350908       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:02:01.350972       1 main.go:301] handling current node
	I1027 19:02:11.348931       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:02:11.348963       1 main.go:301] handling current node
	I1027 19:02:21.349497       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:02:21.349531       1 main.go:301] handling current node
	I1027 19:02:31.355046       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:02:31.355078       1 main.go:301] handling current node
	I1027 19:02:41.349377       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:02:41.349501       1 main.go:301] handling current node
	
	
	==> kube-apiserver [02d35f7174bb35a983c70121d66fa26a8b0d33acfa279db1bc1b3d623b4f8f09] <==
	W1027 18:58:31.856425       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.118.75:443: connect: connection refused
	E1027 18:58:31.856450       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.118.75:443: connect: connection refused" logger="UnhandledError"
	W1027 18:58:31.944941       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.118.75:443: connect: connection refused
	E1027 18:58:31.944986       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.118.75:443: connect: connection refused" logger="UnhandledError"
	W1027 18:58:36.862485       1 handler_proxy.go:99] no RequestInfo found in the context
	E1027 18:58:36.862537       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.239.58:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.239.58:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.239.58:443: connect: connection refused" logger="UnhandledError"
	E1027 18:58:36.862568       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1027 18:58:36.863211       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.239.58:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.239.58:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.239.58:443: connect: connection refused" logger="UnhandledError"
	E1027 18:58:36.868446       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.239.58:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.239.58:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.239.58:443: connect: connection refused" logger="UnhandledError"
	E1027 18:58:36.889508       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.239.58:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.239.58:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.239.58:443: connect: connection refused" logger="UnhandledError"
	E1027 18:58:36.931111       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.239.58:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.239.58:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.239.58:443: connect: connection refused" logger="UnhandledError"
	E1027 18:58:37.012884       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.239.58:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.239.58:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.239.58:443: connect: connection refused" logger="UnhandledError"
	I1027 18:58:37.277160       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1027 18:59:55.188817       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55982: use of closed network connection
	E1027 18:59:55.463221       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56018: use of closed network connection
	E1027 18:59:55.599585       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56028: use of closed network connection
	I1027 19:00:22.101569       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1027 19:00:22.409266       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.4.3"}
	I1027 19:00:28.433932       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1027 19:00:30.693958       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1027 19:00:41.739254       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1027 19:02:41.428750       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.126.134"}
	
	
	==> kube-controller-manager [4a399d9b30f5e68ec2fe6ac2e4b287b9821a6d963956dc63355919c90bbdbff6] <==
	I1027 18:57:48.924419       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 18:57:48.924657       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1027 18:57:48.924747       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1027 18:57:48.924830       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1027 18:57:48.924861       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1027 18:57:48.924908       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 18:57:48.925261       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 18:57:48.927038       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 18:57:48.927246       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 18:57:48.927343       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 18:57:48.930079       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1027 18:57:48.932550       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1027 18:57:48.935090       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 18:57:48.937517       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-101592" podCIDRs=["10.244.0.0/24"]
	I1027 18:57:48.937686       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 18:57:48.939069       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	E1027 18:57:54.924820       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1027 18:58:18.880374       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1027 18:58:18.884490       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	E1027 18:58:18.939326       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1027 18:58:18.939477       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1027 18:58:18.939526       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1027 18:58:18.939553       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 18:58:18.985583       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 18:58:33.879560       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [28587c37519da029d28ad619eccddb0f9cf6be1a53a5a3ca6648894a187e90d2] <==
	I1027 18:57:51.414811       1 server_linux.go:53] "Using iptables proxy"
	I1027 18:57:51.500494       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 18:57:51.600822       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 18:57:51.600869       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1027 18:57:51.600958       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 18:57:51.666090       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 18:57:51.666149       1 server_linux.go:132] "Using iptables Proxier"
	I1027 18:57:51.685554       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 18:57:51.685865       1 server.go:527] "Version info" version="v1.34.1"
	I1027 18:57:51.685883       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 18:57:51.687428       1 config.go:200] "Starting service config controller"
	I1027 18:57:51.687440       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 18:57:51.687467       1 config.go:106] "Starting endpoint slice config controller"
	I1027 18:57:51.687471       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 18:57:51.687482       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 18:57:51.687486       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 18:57:51.688093       1 config.go:309] "Starting node config controller"
	I1027 18:57:51.688100       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 18:57:51.688105       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 18:57:51.787833       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 18:57:51.787878       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 18:57:51.787913       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [752e65ab367c93dbf18c03c14f825896a94d2b17a2a40668c50b5961b7e50972] <==
	E1027 18:57:41.959476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 18:57:41.959578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 18:57:41.959703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 18:57:41.959827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 18:57:41.959937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 18:57:41.960052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 18:57:41.960153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 18:57:41.960249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 18:57:41.960358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 18:57:41.960481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 18:57:41.960584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 18:57:41.960686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 18:57:41.961663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1027 18:57:41.961878       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 18:57:41.962029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 18:57:41.962261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 18:57:41.962330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 18:57:42.882614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1027 18:57:42.895248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 18:57:42.918743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 18:57:42.984091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 18:57:43.028782       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 18:57:43.047194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 18:57:43.077088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1027 18:57:44.688371       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 19:01:04 addons-101592 kubelet[1297]: I1027 19:01:04.975268    1297 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/53a52a69-2d47-4b85-a518-1b980e2cc278-script\") pod \"53a52a69-2d47-4b85-a518-1b980e2cc278\" (UID: \"53a52a69-2d47-4b85-a518-1b980e2cc278\") "
	Oct 27 19:01:04 addons-101592 kubelet[1297]: I1027 19:01:04.975292    1297 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/53a52a69-2d47-4b85-a518-1b980e2cc278-gcp-creds\") pod \"53a52a69-2d47-4b85-a518-1b980e2cc278\" (UID: \"53a52a69-2d47-4b85-a518-1b980e2cc278\") "
	Oct 27 19:01:04 addons-101592 kubelet[1297]: I1027 19:01:04.975333    1297 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/53a52a69-2d47-4b85-a518-1b980e2cc278-data\") pod \"53a52a69-2d47-4b85-a518-1b980e2cc278\" (UID: \"53a52a69-2d47-4b85-a518-1b980e2cc278\") "
	Oct 27 19:01:04 addons-101592 kubelet[1297]: I1027 19:01:04.975486    1297 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53a52a69-2d47-4b85-a518-1b980e2cc278-data" (OuterVolumeSpecName: "data") pod "53a52a69-2d47-4b85-a518-1b980e2cc278" (UID: "53a52a69-2d47-4b85-a518-1b980e2cc278"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 27 19:01:04 addons-101592 kubelet[1297]: I1027 19:01:04.976115    1297 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53a52a69-2d47-4b85-a518-1b980e2cc278-script" (OuterVolumeSpecName: "script") pod "53a52a69-2d47-4b85-a518-1b980e2cc278" (UID: "53a52a69-2d47-4b85-a518-1b980e2cc278"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Oct 27 19:01:04 addons-101592 kubelet[1297]: I1027 19:01:04.976127    1297 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53a52a69-2d47-4b85-a518-1b980e2cc278-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "53a52a69-2d47-4b85-a518-1b980e2cc278" (UID: "53a52a69-2d47-4b85-a518-1b980e2cc278"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 27 19:01:04 addons-101592 kubelet[1297]: I1027 19:01:04.981945    1297 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53a52a69-2d47-4b85-a518-1b980e2cc278-kube-api-access-jt88m" (OuterVolumeSpecName: "kube-api-access-jt88m") pod "53a52a69-2d47-4b85-a518-1b980e2cc278" (UID: "53a52a69-2d47-4b85-a518-1b980e2cc278"). InnerVolumeSpecName "kube-api-access-jt88m". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 27 19:01:05 addons-101592 kubelet[1297]: I1027 19:01:05.076558    1297 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/53a52a69-2d47-4b85-a518-1b980e2cc278-data\") on node \"addons-101592\" DevicePath \"\""
	Oct 27 19:01:05 addons-101592 kubelet[1297]: I1027 19:01:05.076606    1297 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jt88m\" (UniqueName: \"kubernetes.io/projected/53a52a69-2d47-4b85-a518-1b980e2cc278-kube-api-access-jt88m\") on node \"addons-101592\" DevicePath \"\""
	Oct 27 19:01:05 addons-101592 kubelet[1297]: I1027 19:01:05.076619    1297 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/53a52a69-2d47-4b85-a518-1b980e2cc278-script\") on node \"addons-101592\" DevicePath \"\""
	Oct 27 19:01:05 addons-101592 kubelet[1297]: I1027 19:01:05.076631    1297 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/53a52a69-2d47-4b85-a518-1b980e2cc278-gcp-creds\") on node \"addons-101592\" DevicePath \"\""
	Oct 27 19:01:05 addons-101592 kubelet[1297]: I1027 19:01:05.812722    1297 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cee634a52a9ce6901169fcefd0b85277234117f7a23846cc54c1b8ab89cf212c"
	Oct 27 19:01:05 addons-101592 kubelet[1297]: E1027 19:01:05.814777    1297 status_manager.go:1018] "Failed to get status for pod" err="pods \"helper-pod-delete-pvc-c9f40c89-0f13-48bb-bf71-f70c4746ee6e\" is forbidden: User \"system:node:addons-101592\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-101592' and this object" podUID="53a52a69-2d47-4b85-a518-1b980e2cc278" pod="local-path-storage/helper-pod-delete-pvc-c9f40c89-0f13-48bb-bf71-f70c4746ee6e"
	Oct 27 19:01:06 addons-101592 kubelet[1297]: I1027 19:01:06.381750    1297 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53a52a69-2d47-4b85-a518-1b980e2cc278" path="/var/lib/kubelet/pods/53a52a69-2d47-4b85-a518-1b980e2cc278/volumes"
	Oct 27 19:01:35 addons-101592 kubelet[1297]: I1027 19:01:35.379314    1297 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-jvgtv" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 19:01:44 addons-101592 kubelet[1297]: I1027 19:01:44.544860    1297 scope.go:117] "RemoveContainer" containerID="23aaa7e098a512c90dbf45b1adb39b0af023fca0813ec5855b50060d35e51f78"
	Oct 27 19:01:44 addons-101592 kubelet[1297]: I1027 19:01:44.555841    1297 scope.go:117] "RemoveContainer" containerID="49918b6d00713ee6e6e91022988b3802b1cdd7d87fce864a0ddaf328fe2b966b"
	Oct 27 19:01:58 addons-101592 kubelet[1297]: I1027 19:01:58.379601    1297 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-sghjb" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 19:02:00 addons-101592 kubelet[1297]: I1027 19:02:00.379663    1297 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-k87sb" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 19:02:41 addons-101592 kubelet[1297]: I1027 19:02:41.357223    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkfml\" (UniqueName: \"kubernetes.io/projected/af3727e4-fa58-4625-9e1c-45ab324a01de-kube-api-access-bkfml\") pod \"hello-world-app-5d498dc89-q864q\" (UID: \"af3727e4-fa58-4625-9e1c-45ab324a01de\") " pod="default/hello-world-app-5d498dc89-q864q"
	Oct 27 19:02:41 addons-101592 kubelet[1297]: I1027 19:02:41.357830    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/af3727e4-fa58-4625-9e1c-45ab324a01de-gcp-creds\") pod \"hello-world-app-5d498dc89-q864q\" (UID: \"af3727e4-fa58-4625-9e1c-45ab324a01de\") " pod="default/hello-world-app-5d498dc89-q864q"
	Oct 27 19:02:41 addons-101592 kubelet[1297]: W1027 19:02:41.620248    1297 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6440f0423a17a97cc1b07b80749bdf1a62b64d235db78febedbc667a9b5028d6/crio-4f046711559854d21a5101092b7f19bd64ea40037bebc84be3ab74a5e73d0d3d WatchSource:0}: Error finding container 4f046711559854d21a5101092b7f19bd64ea40037bebc84be3ab74a5e73d0d3d: Status 404 returned error can't find the container with id 4f046711559854d21a5101092b7f19bd64ea40037bebc84be3ab74a5e73d0d3d
	Oct 27 19:02:41 addons-101592 kubelet[1297]: I1027 19:02:41.879897    1297 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-fz96k" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 19:02:41 addons-101592 kubelet[1297]: W1027 19:02:41.937372    1297 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6440f0423a17a97cc1b07b80749bdf1a62b64d235db78febedbc667a9b5028d6/crio-cd8189a7bc95418b6b6b18946a76b003647c06ab1e233077a800061341b6ca26 WatchSource:0}: Error finding container cd8189a7bc95418b6b6b18946a76b003647c06ab1e233077a800061341b6ca26: Status 404 returned error can't find the container with id cd8189a7bc95418b6b6b18946a76b003647c06ab1e233077a800061341b6ca26
	Oct 27 19:02:43 addons-101592 kubelet[1297]: I1027 19:02:43.182266    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-q864q" podStartSLOduration=1.34951099 podStartE2EDuration="2.182248979s" podCreationTimestamp="2025-10-27 19:02:41 +0000 UTC" firstStartedPulling="2025-10-27 19:02:41.623391767 +0000 UTC m=+297.351495867" lastFinishedPulling="2025-10-27 19:02:42.456129757 +0000 UTC m=+298.184233856" observedRunningTime="2025-10-27 19:02:43.181122662 +0000 UTC m=+298.909226762" watchObservedRunningTime="2025-10-27 19:02:43.182248979 +0000 UTC m=+298.910353079"
	
	
	==> storage-provisioner [f81e4711fbc01c871625a0d7da229dd580feb4c12934b3a38b50966b57a4259b] <==
	W1027 19:02:19.997665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:22.000662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:22.005421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:24.009243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:24.022224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:26.034765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:26.039468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:28.043139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:28.048484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:30.055427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:30.061431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:32.064639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:32.068830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:34.072506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:34.079183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:36.082262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:36.087019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:38.090901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:38.095326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:40.098419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:40.105436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:42.110380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:42.116806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:44.123318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:02:44.129775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
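The storage-provisioner warnings at the tail of the log above appear to come from its leader-election client still watching the legacy v1 Endpoints API; they recur every couple of seconds and are noise rather than a failure. A minimal sketch for checking the replacement resource by hand, assuming the addons-101592 context is still reachable (the command is illustrative, not part of the test):

	# List the discovery.k8s.io/v1 EndpointSlices that supersede v1 Endpoints
	kubectl --context addons-101592 get endpointslices.discovery.k8s.io -A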
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-101592 -n addons-101592
helpers_test.go:269: (dbg) Run:  kubectl --context addons-101592 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-6hkms ingress-nginx-admission-patch-6wkkl
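Both pods matched here are the ingress-nginx admission webhook helpers, which run to completion by design and so fall outside status.phase=Running; a standalone form of the same probe, assuming the same context, would be:

	# Pods in any phase other than Running (Succeeded/Failed/Pending), across all namespaces
	kubectl --context addons-101592 get pods -A --field-selector=status.phase!=Running -o name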
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-101592 describe pod ingress-nginx-admission-create-6hkms ingress-nginx-admission-patch-6wkkl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-101592 describe pod ingress-nginx-admission-create-6hkms ingress-nginx-admission-patch-6wkkl: exit status 1 (101.882149ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-6hkms" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-6wkkl" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-101592 describe pod ingress-nginx-admission-create-6hkms ingress-nginx-admission-patch-6wkkl: exit status 1
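The NotFound errors are consistent with those completed pods being garbage-collected between the listing above and the describe call; they belong to one-shot Jobs, so the Jobs are the more stable post-mortem target. A hedged sketch, assuming the ingress-nginx namespace still exists:

	# Inspect the one-shot admission Jobs rather than their short-lived pods
	kubectl --context addons-101592 -n ingress-nginx get jobs -o wide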
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-101592 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-101592 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (287.602674ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 19:02:45.684982  278320 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:02:45.686115  278320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:02:45.686151  278320 out.go:374] Setting ErrFile to fd 2...
	I1027 19:02:45.686173  278320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:02:45.686476  278320 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 19:02:45.686814  278320 mustload.go:65] Loading cluster: addons-101592
	I1027 19:02:45.687400  278320 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:02:45.687462  278320 addons.go:606] checking whether the cluster is paused
	I1027 19:02:45.687638  278320 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:02:45.687673  278320 host.go:66] Checking if "addons-101592" exists ...
	I1027 19:02:45.688320  278320 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 19:02:45.716738  278320 ssh_runner.go:195] Run: systemctl --version
	I1027 19:02:45.716807  278320 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 19:02:45.739739  278320 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 19:02:45.845477  278320 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:02:45.845571  278320 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:02:45.878942  278320 cri.go:89] found id: "215a8f9c237fc26c8e6028729f174bc00f9840ee6e3d3aceb08acb42f408cee7"
	I1027 19:02:45.878966  278320 cri.go:89] found id: "5382dd686aba31be8bc3837695d523e68d2ab467feea65d73872158fd992ca55"
	I1027 19:02:45.878971  278320 cri.go:89] found id: "3e163786b302c708b17fe3ccf72adc340a75e221cd0896244768e1510b5aa88f"
	I1027 19:02:45.879010  278320 cri.go:89] found id: "838eb4978f205b327ad70c9fe726ab47413794195e774fac18ab9ae207723d6a"
	I1027 19:02:45.879015  278320 cri.go:89] found id: "d6943420da6a4a5c627e3e1859fcefbdc0f116ec559255515e3c76875ef303ec"
	I1027 19:02:45.879019  278320 cri.go:89] found id: "7e2b5aeafd0057d1e7d6c15f07b26a4833c3369e948af52f6ce2a927043ec7ce"
	I1027 19:02:45.879023  278320 cri.go:89] found id: "2aa2da6d8f06b0fc78c255affa19eec7d586298271ac54c0800166ccd1369669"
	I1027 19:02:45.879026  278320 cri.go:89] found id: "50e0b6b85b8a72a9fc162999003b05f4316b2f1d308c55874ee95912e0e1662e"
	I1027 19:02:45.879030  278320 cri.go:89] found id: "cd4e827788ce9c8dcbacf66655ce3bbee1fa66b94eb5c687b755fb2b6f5a5cc1"
	I1027 19:02:45.879038  278320 cri.go:89] found id: "7f055e0b328c77feb86cbf8ba19ed9f73334ca7b97c3bc8004db97b398fbfd75"
	I1027 19:02:45.879046  278320 cri.go:89] found id: "7e472e3b179a0264586ada3ba6103e35c58134a6e668cb3b2b6bf2e04e56940a"
	I1027 19:02:45.879049  278320 cri.go:89] found id: "a9d1dbc41feea92d008c9764ac8993cf482b1be7c976bf1221012c77bb4740da"
	I1027 19:02:45.879053  278320 cri.go:89] found id: "bfa31a6efbb78b65e0d3b92e9683ba4e99525bf68f9335010d133d41e467bee4"
	I1027 19:02:45.879057  278320 cri.go:89] found id: "458122abc2a23963c258beece13449337f47c2ccdc075406236a8a13a8063201"
	I1027 19:02:45.879066  278320 cri.go:89] found id: "2d95c80ef718cfc6804388ffc97159e55cf48ac7c6673231ecb8239d50e7d811"
	I1027 19:02:45.879071  278320 cri.go:89] found id: "a33d881a2daa0476651bd4a08fc46730205cfe03c3681178bb615a5e9635926e"
	I1027 19:02:45.879074  278320 cri.go:89] found id: "8ebcbe2c9975f2a8719a3839dcd24a15c292a81c5c76b171625fac6a82dd851d"
	I1027 19:02:45.879078  278320 cri.go:89] found id: "fe0ea9b1d2cf2cbefbf8c8c1c5b40f8d072efd2d489b534d519c969d9f5078d6"
	I1027 19:02:45.879081  278320 cri.go:89] found id: "f81e4711fbc01c871625a0d7da229dd580feb4c12934b3a38b50966b57a4259b"
	I1027 19:02:45.879084  278320 cri.go:89] found id: "28587c37519da029d28ad619eccddb0f9cf6be1a53a5a3ca6648894a187e90d2"
	I1027 19:02:45.879089  278320 cri.go:89] found id: "1e5034229198563b54bd8e5f173e0492eee9301465992e61e3c894cb36e53dd6"
	I1027 19:02:45.879092  278320 cri.go:89] found id: "d5efbcd6024e7af2a933432a33e2b4197973d93eab2d9c966cb6e488d871f23b"
	I1027 19:02:45.879095  278320 cri.go:89] found id: "4a399d9b30f5e68ec2fe6ac2e4b287b9821a6d963956dc63355919c90bbdbff6"
	I1027 19:02:45.879098  278320 cri.go:89] found id: "752e65ab367c93dbf18c03c14f825896a94d2b17a2a40668c50b5961b7e50972"
	I1027 19:02:45.879101  278320 cri.go:89] found id: "02d35f7174bb35a983c70121d66fa26a8b0d33acfa279db1bc1b3d623b4f8f09"
	I1027 19:02:45.879104  278320 cri.go:89] found id: ""
	I1027 19:02:45.879159  278320 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:02:45.894189  278320 out.go:203] 
	W1027 19:02:45.897166  278320 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:02:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:02:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 19:02:45.897192  278320 out.go:285] * 
	* 
	W1027 19:02:45.903416  278320 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 19:02:45.906456  278320 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-101592 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-101592 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-101592 addons disable ingress --alsologtostderr -v=1: exit status 11 (273.378701ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 19:02:45.965257  278363 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:02:45.966116  278363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:02:45.966160  278363 out.go:374] Setting ErrFile to fd 2...
	I1027 19:02:45.966181  278363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:02:45.966500  278363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 19:02:45.966862  278363 mustload.go:65] Loading cluster: addons-101592
	I1027 19:02:45.967378  278363 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:02:45.967424  278363 addons.go:606] checking whether the cluster is paused
	I1027 19:02:45.967574  278363 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:02:45.967606  278363 host.go:66] Checking if "addons-101592" exists ...
	I1027 19:02:45.968112  278363 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 19:02:45.985657  278363 ssh_runner.go:195] Run: systemctl --version
	I1027 19:02:45.985712  278363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 19:02:46.003976  278363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 19:02:46.109699  278363 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:02:46.109796  278363 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:02:46.150160  278363 cri.go:89] found id: "215a8f9c237fc26c8e6028729f174bc00f9840ee6e3d3aceb08acb42f408cee7"
	I1027 19:02:46.150187  278363 cri.go:89] found id: "5382dd686aba31be8bc3837695d523e68d2ab467feea65d73872158fd992ca55"
	I1027 19:02:46.150193  278363 cri.go:89] found id: "3e163786b302c708b17fe3ccf72adc340a75e221cd0896244768e1510b5aa88f"
	I1027 19:02:46.150197  278363 cri.go:89] found id: "838eb4978f205b327ad70c9fe726ab47413794195e774fac18ab9ae207723d6a"
	I1027 19:02:46.150209  278363 cri.go:89] found id: "d6943420da6a4a5c627e3e1859fcefbdc0f116ec559255515e3c76875ef303ec"
	I1027 19:02:46.150214  278363 cri.go:89] found id: "7e2b5aeafd0057d1e7d6c15f07b26a4833c3369e948af52f6ce2a927043ec7ce"
	I1027 19:02:46.150217  278363 cri.go:89] found id: "2aa2da6d8f06b0fc78c255affa19eec7d586298271ac54c0800166ccd1369669"
	I1027 19:02:46.150220  278363 cri.go:89] found id: "50e0b6b85b8a72a9fc162999003b05f4316b2f1d308c55874ee95912e0e1662e"
	I1027 19:02:46.150223  278363 cri.go:89] found id: "cd4e827788ce9c8dcbacf66655ce3bbee1fa66b94eb5c687b755fb2b6f5a5cc1"
	I1027 19:02:46.150231  278363 cri.go:89] found id: "7f055e0b328c77feb86cbf8ba19ed9f73334ca7b97c3bc8004db97b398fbfd75"
	I1027 19:02:46.150239  278363 cri.go:89] found id: "7e472e3b179a0264586ada3ba6103e35c58134a6e668cb3b2b6bf2e04e56940a"
	I1027 19:02:46.150243  278363 cri.go:89] found id: "a9d1dbc41feea92d008c9764ac8993cf482b1be7c976bf1221012c77bb4740da"
	I1027 19:02:46.150246  278363 cri.go:89] found id: "bfa31a6efbb78b65e0d3b92e9683ba4e99525bf68f9335010d133d41e467bee4"
	I1027 19:02:46.150249  278363 cri.go:89] found id: "458122abc2a23963c258beece13449337f47c2ccdc075406236a8a13a8063201"
	I1027 19:02:46.150253  278363 cri.go:89] found id: "2d95c80ef718cfc6804388ffc97159e55cf48ac7c6673231ecb8239d50e7d811"
	I1027 19:02:46.150258  278363 cri.go:89] found id: "a33d881a2daa0476651bd4a08fc46730205cfe03c3681178bb615a5e9635926e"
	I1027 19:02:46.150267  278363 cri.go:89] found id: "8ebcbe2c9975f2a8719a3839dcd24a15c292a81c5c76b171625fac6a82dd851d"
	I1027 19:02:46.150271  278363 cri.go:89] found id: "fe0ea9b1d2cf2cbefbf8c8c1c5b40f8d072efd2d489b534d519c969d9f5078d6"
	I1027 19:02:46.150274  278363 cri.go:89] found id: "f81e4711fbc01c871625a0d7da229dd580feb4c12934b3a38b50966b57a4259b"
	I1027 19:02:46.150277  278363 cri.go:89] found id: "28587c37519da029d28ad619eccddb0f9cf6be1a53a5a3ca6648894a187e90d2"
	I1027 19:02:46.150282  278363 cri.go:89] found id: "1e5034229198563b54bd8e5f173e0492eee9301465992e61e3c894cb36e53dd6"
	I1027 19:02:46.150285  278363 cri.go:89] found id: "d5efbcd6024e7af2a933432a33e2b4197973d93eab2d9c966cb6e488d871f23b"
	I1027 19:02:46.150288  278363 cri.go:89] found id: "4a399d9b30f5e68ec2fe6ac2e4b287b9821a6d963956dc63355919c90bbdbff6"
	I1027 19:02:46.150291  278363 cri.go:89] found id: "752e65ab367c93dbf18c03c14f825896a94d2b17a2a40668c50b5961b7e50972"
	I1027 19:02:46.150293  278363 cri.go:89] found id: "02d35f7174bb35a983c70121d66fa26a8b0d33acfa279db1bc1b3d623b4f8f09"
	I1027 19:02:46.150296  278363 cri.go:89] found id: ""
	I1027 19:02:46.150355  278363 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:02:46.166082  278363 out.go:203] 
	W1027 19:02:46.168873  278363 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:02:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 19:02:46.168901  278363 out.go:285] * 
	W1027 19:02:46.174927  278363 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 19:02:46.178029  278363 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-101592 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (144.45s)
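Every addon enable/disable failure in this report has the same shape: minikube enumerates kube-system containers via crictl (the long "found id:" runs above), then probes for paused containers with `sudo runc list -f json`, and that probe exits 1 because /run/runc is missing on this crio node (most likely crio drives its OCI runtime with a different state root, so the bare runc CLI has nothing to read). Below is a minimal standalone sketch of such a probe, not minikube's actual code: the runc command line and its JSON field names are real, while the error handling and the paused filter are illustrative assumptions.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer keeps only the fields of `runc list -f json` output that a
// paused-state probe needs; runc emits more (pid, bundle, created, ...).
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	// The exact command from the log; on this node it fails with
	// "open /run/runc: no such file or directory" before printing anything.
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		fmt.Println("list paused failed:", err) // the branch this report keeps hitting
		return
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, c := range containers {
		if c.Status == "paused" {
			fmt.Println("paused container:", c.ID)
		}
	}
}

Since the crictl listing immediately beforehand succeeds in every block, the failure is specific to invoking runc directly, not to the containers themselves.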

                                                
                                    
TestAddons/parallel/InspektorGadget (5.33s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-647wx" [250d1ae1-8142-4c2d-a194-ec0b38b6b428] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003550353s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-101592 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-101592 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (322.647713ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 19:00:21.512397  275809 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:00:21.513201  275809 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:00:21.513213  275809 out.go:374] Setting ErrFile to fd 2...
	I1027 19:00:21.513218  275809 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:00:21.514969  275809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 19:00:21.515368  275809 mustload.go:65] Loading cluster: addons-101592
	I1027 19:00:21.515740  275809 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:00:21.515762  275809 addons.go:606] checking whether the cluster is paused
	I1027 19:00:21.515863  275809 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:00:21.515878  275809 host.go:66] Checking if "addons-101592" exists ...
	I1027 19:00:21.516330  275809 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 19:00:21.535186  275809 ssh_runner.go:195] Run: systemctl --version
	I1027 19:00:21.535247  275809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 19:00:21.555185  275809 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 19:00:21.663007  275809 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:00:21.663109  275809 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:00:21.704574  275809 cri.go:89] found id: "3e163786b302c708b17fe3ccf72adc340a75e221cd0896244768e1510b5aa88f"
	I1027 19:00:21.704600  275809 cri.go:89] found id: "838eb4978f205b327ad70c9fe726ab47413794195e774fac18ab9ae207723d6a"
	I1027 19:00:21.704605  275809 cri.go:89] found id: "d6943420da6a4a5c627e3e1859fcefbdc0f116ec559255515e3c76875ef303ec"
	I1027 19:00:21.704609  275809 cri.go:89] found id: "7e2b5aeafd0057d1e7d6c15f07b26a4833c3369e948af52f6ce2a927043ec7ce"
	I1027 19:00:21.704612  275809 cri.go:89] found id: "2aa2da6d8f06b0fc78c255affa19eec7d586298271ac54c0800166ccd1369669"
	I1027 19:00:21.704616  275809 cri.go:89] found id: "50e0b6b85b8a72a9fc162999003b05f4316b2f1d308c55874ee95912e0e1662e"
	I1027 19:00:21.704619  275809 cri.go:89] found id: "cd4e827788ce9c8dcbacf66655ce3bbee1fa66b94eb5c687b755fb2b6f5a5cc1"
	I1027 19:00:21.704622  275809 cri.go:89] found id: "7f055e0b328c77feb86cbf8ba19ed9f73334ca7b97c3bc8004db97b398fbfd75"
	I1027 19:00:21.704625  275809 cri.go:89] found id: "7e472e3b179a0264586ada3ba6103e35c58134a6e668cb3b2b6bf2e04e56940a"
	I1027 19:00:21.704634  275809 cri.go:89] found id: "a9d1dbc41feea92d008c9764ac8993cf482b1be7c976bf1221012c77bb4740da"
	I1027 19:00:21.704638  275809 cri.go:89] found id: "bfa31a6efbb78b65e0d3b92e9683ba4e99525bf68f9335010d133d41e467bee4"
	I1027 19:00:21.704641  275809 cri.go:89] found id: "458122abc2a23963c258beece13449337f47c2ccdc075406236a8a13a8063201"
	I1027 19:00:21.704645  275809 cri.go:89] found id: "2d95c80ef718cfc6804388ffc97159e55cf48ac7c6673231ecb8239d50e7d811"
	I1027 19:00:21.704648  275809 cri.go:89] found id: "a33d881a2daa0476651bd4a08fc46730205cfe03c3681178bb615a5e9635926e"
	I1027 19:00:21.704651  275809 cri.go:89] found id: "8ebcbe2c9975f2a8719a3839dcd24a15c292a81c5c76b171625fac6a82dd851d"
	I1027 19:00:21.704660  275809 cri.go:89] found id: "fe0ea9b1d2cf2cbefbf8c8c1c5b40f8d072efd2d489b534d519c969d9f5078d6"
	I1027 19:00:21.704667  275809 cri.go:89] found id: "f81e4711fbc01c871625a0d7da229dd580feb4c12934b3a38b50966b57a4259b"
	I1027 19:00:21.704673  275809 cri.go:89] found id: "28587c37519da029d28ad619eccddb0f9cf6be1a53a5a3ca6648894a187e90d2"
	I1027 19:00:21.704676  275809 cri.go:89] found id: "1e5034229198563b54bd8e5f173e0492eee9301465992e61e3c894cb36e53dd6"
	I1027 19:00:21.704679  275809 cri.go:89] found id: "d5efbcd6024e7af2a933432a33e2b4197973d93eab2d9c966cb6e488d871f23b"
	I1027 19:00:21.704684  275809 cri.go:89] found id: "4a399d9b30f5e68ec2fe6ac2e4b287b9821a6d963956dc63355919c90bbdbff6"
	I1027 19:00:21.704687  275809 cri.go:89] found id: "752e65ab367c93dbf18c03c14f825896a94d2b17a2a40668c50b5961b7e50972"
	I1027 19:00:21.704690  275809 cri.go:89] found id: "02d35f7174bb35a983c70121d66fa26a8b0d33acfa279db1bc1b3d623b4f8f09"
	I1027 19:00:21.704694  275809 cri.go:89] found id: ""
	I1027 19:00:21.704743  275809 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:00:21.722090  275809 out.go:203] 
	W1027 19:00:21.723592  275809 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:00:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 19:00:21.723626  275809 out.go:285] * 
	W1027 19:00:21.729644  275809 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 19:00:21.731590  275809 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-101592 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.33s)
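The gadget pods themselves went healthy in about five seconds; only the trailing disable call failed. The wait step logged at addons_test.go:823 amounts to polling pod phases by label selector until all are Running. A rough stand-in for that wait (not the helpers_test.go implementation): the context, namespace, selector, and 8m0s budget are copied from the log, while the 2-second poll cadence and the jsonpath query are assumptions.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitRunning polls pod phases for a label selector until every selected pod
// reports Running or the deadline passes.
func waitRunning(kubeContext, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pods", "-n", ns, "-l", selector,
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil {
			phases := strings.Fields(string(out))
			allRunning := len(phases) > 0
			for _, p := range phases {
				if p != "Running" {
					allRunning = false
				}
			}
			if allRunning {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pods %q in %q not Running within %v", selector, ns, timeout)
}

func main() {
	if err := waitRunning("addons-101592", "gadget", "k8s-app=gadget", 8*time.Minute); err != nil {
		fmt.Println(err)
	}
}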

                                                
                                    
TestAddons/parallel/MetricsServer (5.39s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.561123ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-mmqw2" [38c42c1b-338f-4857-a519-d3a5ae92f76f] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00354681s
addons_test.go:463: (dbg) Run:  kubectl --context addons-101592 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-101592 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-101592 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (292.098233ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 19:00:16.203634  275664 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:00:16.204362  275664 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:00:16.204376  275664 out.go:374] Setting ErrFile to fd 2...
	I1027 19:00:16.204382  275664 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:00:16.204636  275664 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 19:00:16.204930  275664 mustload.go:65] Loading cluster: addons-101592
	I1027 19:00:16.205286  275664 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:00:16.205303  275664 addons.go:606] checking whether the cluster is paused
	I1027 19:00:16.205400  275664 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:00:16.205415  275664 host.go:66] Checking if "addons-101592" exists ...
	I1027 19:00:16.205845  275664 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 19:00:16.223929  275664 ssh_runner.go:195] Run: systemctl --version
	I1027 19:00:16.223995  275664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 19:00:16.244060  275664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 19:00:16.349509  275664 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:00:16.349601  275664 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:00:16.381354  275664 cri.go:89] found id: "3e163786b302c708b17fe3ccf72adc340a75e221cd0896244768e1510b5aa88f"
	I1027 19:00:16.381380  275664 cri.go:89] found id: "838eb4978f205b327ad70c9fe726ab47413794195e774fac18ab9ae207723d6a"
	I1027 19:00:16.381385  275664 cri.go:89] found id: "d6943420da6a4a5c627e3e1859fcefbdc0f116ec559255515e3c76875ef303ec"
	I1027 19:00:16.381388  275664 cri.go:89] found id: "7e2b5aeafd0057d1e7d6c15f07b26a4833c3369e948af52f6ce2a927043ec7ce"
	I1027 19:00:16.381391  275664 cri.go:89] found id: "2aa2da6d8f06b0fc78c255affa19eec7d586298271ac54c0800166ccd1369669"
	I1027 19:00:16.381395  275664 cri.go:89] found id: "50e0b6b85b8a72a9fc162999003b05f4316b2f1d308c55874ee95912e0e1662e"
	I1027 19:00:16.381398  275664 cri.go:89] found id: "cd4e827788ce9c8dcbacf66655ce3bbee1fa66b94eb5c687b755fb2b6f5a5cc1"
	I1027 19:00:16.381401  275664 cri.go:89] found id: "7f055e0b328c77feb86cbf8ba19ed9f73334ca7b97c3bc8004db97b398fbfd75"
	I1027 19:00:16.381405  275664 cri.go:89] found id: "7e472e3b179a0264586ada3ba6103e35c58134a6e668cb3b2b6bf2e04e56940a"
	I1027 19:00:16.381411  275664 cri.go:89] found id: "a9d1dbc41feea92d008c9764ac8993cf482b1be7c976bf1221012c77bb4740da"
	I1027 19:00:16.381414  275664 cri.go:89] found id: "bfa31a6efbb78b65e0d3b92e9683ba4e99525bf68f9335010d133d41e467bee4"
	I1027 19:00:16.381417  275664 cri.go:89] found id: "458122abc2a23963c258beece13449337f47c2ccdc075406236a8a13a8063201"
	I1027 19:00:16.381420  275664 cri.go:89] found id: "2d95c80ef718cfc6804388ffc97159e55cf48ac7c6673231ecb8239d50e7d811"
	I1027 19:00:16.381423  275664 cri.go:89] found id: "a33d881a2daa0476651bd4a08fc46730205cfe03c3681178bb615a5e9635926e"
	I1027 19:00:16.381426  275664 cri.go:89] found id: "8ebcbe2c9975f2a8719a3839dcd24a15c292a81c5c76b171625fac6a82dd851d"
	I1027 19:00:16.381432  275664 cri.go:89] found id: "fe0ea9b1d2cf2cbefbf8c8c1c5b40f8d072efd2d489b534d519c969d9f5078d6"
	I1027 19:00:16.381438  275664 cri.go:89] found id: "f81e4711fbc01c871625a0d7da229dd580feb4c12934b3a38b50966b57a4259b"
	I1027 19:00:16.381442  275664 cri.go:89] found id: "28587c37519da029d28ad619eccddb0f9cf6be1a53a5a3ca6648894a187e90d2"
	I1027 19:00:16.381445  275664 cri.go:89] found id: "1e5034229198563b54bd8e5f173e0492eee9301465992e61e3c894cb36e53dd6"
	I1027 19:00:16.381448  275664 cri.go:89] found id: "d5efbcd6024e7af2a933432a33e2b4197973d93eab2d9c966cb6e488d871f23b"
	I1027 19:00:16.381453  275664 cri.go:89] found id: "4a399d9b30f5e68ec2fe6ac2e4b287b9821a6d963956dc63355919c90bbdbff6"
	I1027 19:00:16.381456  275664 cri.go:89] found id: "752e65ab367c93dbf18c03c14f825896a94d2b17a2a40668c50b5961b7e50972"
	I1027 19:00:16.381459  275664 cri.go:89] found id: "02d35f7174bb35a983c70121d66fa26a8b0d33acfa279db1bc1b3d623b4f8f09"
	I1027 19:00:16.381463  275664 cri.go:89] found id: ""
	I1027 19:00:16.381513  275664 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:00:16.395507  275664 out.go:203] 
	W1027 19:00:16.396864  275664 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:00:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 19:00:16.396884  275664 out.go:285] * 
	W1027 19:00:16.403653  275664 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 19:00:16.405665  275664 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-101592 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.39s)

                                                
                                    
TestAddons/parallel/CSI (43.83s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1027 18:59:58.917434  267880 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1027 18:59:58.922269  267880 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1027 18:59:58.922300  267880 kapi.go:107] duration metric: took 4.881796ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.891872ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-101592 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101592 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101592 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101592 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101592 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101592 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101592 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101592 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101592 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101592 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101592 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101592 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101592 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101592 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101592 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101592 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101592 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101592 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101592 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101592 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-101592 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [0d120df7-09c3-4fdc-a6fd-b241e8632b11] Pending
helpers_test.go:352: "task-pv-pod" [0d120df7-09c3-4fdc-a6fd-b241e8632b11] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [0d120df7-09c3-4fdc-a6fd-b241e8632b11] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.007994748s
addons_test.go:572: (dbg) Run:  kubectl --context addons-101592 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-101592 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-101592 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-101592 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-101592 delete pod task-pv-pod: (1.147488772s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-101592 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-101592 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101592 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101592 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101592 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101592 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-101592 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [8dd0ecff-cd5a-4ba7-9927-4858ef47e7eb] Pending
helpers_test.go:352: "task-pv-pod-restore" [8dd0ecff-cd5a-4ba7-9927-4858ef47e7eb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [8dd0ecff-cd5a-4ba7-9927-4858ef47e7eb] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003104847s
addons_test.go:614: (dbg) Run:  kubectl --context addons-101592 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-101592 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-101592 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-101592 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-101592 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (350.054475ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 19:00:42.248168  276577 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:00:42.249431  276577 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:00:42.249502  276577 out.go:374] Setting ErrFile to fd 2...
	I1027 19:00:42.249525  276577 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:00:42.250003  276577 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 19:00:42.250537  276577 mustload.go:65] Loading cluster: addons-101592
	I1027 19:00:42.251331  276577 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:00:42.251408  276577 addons.go:606] checking whether the cluster is paused
	I1027 19:00:42.251618  276577 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:00:42.251675  276577 host.go:66] Checking if "addons-101592" exists ...
	I1027 19:00:42.252492  276577 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 19:00:42.283811  276577 ssh_runner.go:195] Run: systemctl --version
	I1027 19:00:42.283886  276577 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 19:00:42.308201  276577 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 19:00:42.422074  276577 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:00:42.422177  276577 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:00:42.457439  276577 cri.go:89] found id: "3e163786b302c708b17fe3ccf72adc340a75e221cd0896244768e1510b5aa88f"
	I1027 19:00:42.457459  276577 cri.go:89] found id: "838eb4978f205b327ad70c9fe726ab47413794195e774fac18ab9ae207723d6a"
	I1027 19:00:42.457466  276577 cri.go:89] found id: "d6943420da6a4a5c627e3e1859fcefbdc0f116ec559255515e3c76875ef303ec"
	I1027 19:00:42.457489  276577 cri.go:89] found id: "7e2b5aeafd0057d1e7d6c15f07b26a4833c3369e948af52f6ce2a927043ec7ce"
	I1027 19:00:42.457493  276577 cri.go:89] found id: "2aa2da6d8f06b0fc78c255affa19eec7d586298271ac54c0800166ccd1369669"
	I1027 19:00:42.457498  276577 cri.go:89] found id: "50e0b6b85b8a72a9fc162999003b05f4316b2f1d308c55874ee95912e0e1662e"
	I1027 19:00:42.457501  276577 cri.go:89] found id: "cd4e827788ce9c8dcbacf66655ce3bbee1fa66b94eb5c687b755fb2b6f5a5cc1"
	I1027 19:00:42.457505  276577 cri.go:89] found id: "7f055e0b328c77feb86cbf8ba19ed9f73334ca7b97c3bc8004db97b398fbfd75"
	I1027 19:00:42.457508  276577 cri.go:89] found id: "7e472e3b179a0264586ada3ba6103e35c58134a6e668cb3b2b6bf2e04e56940a"
	I1027 19:00:42.457515  276577 cri.go:89] found id: "a9d1dbc41feea92d008c9764ac8993cf482b1be7c976bf1221012c77bb4740da"
	I1027 19:00:42.457521  276577 cri.go:89] found id: "bfa31a6efbb78b65e0d3b92e9683ba4e99525bf68f9335010d133d41e467bee4"
	I1027 19:00:42.457524  276577 cri.go:89] found id: "458122abc2a23963c258beece13449337f47c2ccdc075406236a8a13a8063201"
	I1027 19:00:42.457527  276577 cri.go:89] found id: "2d95c80ef718cfc6804388ffc97159e55cf48ac7c6673231ecb8239d50e7d811"
	I1027 19:00:42.457530  276577 cri.go:89] found id: "a33d881a2daa0476651bd4a08fc46730205cfe03c3681178bb615a5e9635926e"
	I1027 19:00:42.457533  276577 cri.go:89] found id: "8ebcbe2c9975f2a8719a3839dcd24a15c292a81c5c76b171625fac6a82dd851d"
	I1027 19:00:42.457540  276577 cri.go:89] found id: "fe0ea9b1d2cf2cbefbf8c8c1c5b40f8d072efd2d489b534d519c969d9f5078d6"
	I1027 19:00:42.457545  276577 cri.go:89] found id: "f81e4711fbc01c871625a0d7da229dd580feb4c12934b3a38b50966b57a4259b"
	I1027 19:00:42.457550  276577 cri.go:89] found id: "28587c37519da029d28ad619eccddb0f9cf6be1a53a5a3ca6648894a187e90d2"
	I1027 19:00:42.457553  276577 cri.go:89] found id: "1e5034229198563b54bd8e5f173e0492eee9301465992e61e3c894cb36e53dd6"
	I1027 19:00:42.457556  276577 cri.go:89] found id: "d5efbcd6024e7af2a933432a33e2b4197973d93eab2d9c966cb6e488d871f23b"
	I1027 19:00:42.457560  276577 cri.go:89] found id: "4a399d9b30f5e68ec2fe6ac2e4b287b9821a6d963956dc63355919c90bbdbff6"
	I1027 19:00:42.457563  276577 cri.go:89] found id: "752e65ab367c93dbf18c03c14f825896a94d2b17a2a40668c50b5961b7e50972"
	I1027 19:00:42.457566  276577 cri.go:89] found id: "02d35f7174bb35a983c70121d66fa26a8b0d33acfa279db1bc1b3d623b4f8f09"
	I1027 19:00:42.457570  276577 cri.go:89] found id: ""
	I1027 19:00:42.457669  276577 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:00:42.473492  276577 out.go:203] 
	W1027 19:00:42.476517  276577 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:00:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 19:00:42.476547  276577 out.go:285] * 
	W1027 19:00:42.482570  276577 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 19:00:42.485736  276577 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-101592 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-101592 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-101592 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (255.422634ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 19:00:42.539502  276620 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:00:42.540284  276620 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:00:42.540297  276620 out.go:374] Setting ErrFile to fd 2...
	I1027 19:00:42.540303  276620 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:00:42.540568  276620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 19:00:42.540880  276620 mustload.go:65] Loading cluster: addons-101592
	I1027 19:00:42.541249  276620 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:00:42.541266  276620 addons.go:606] checking whether the cluster is paused
	I1027 19:00:42.541363  276620 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:00:42.541377  276620 host.go:66] Checking if "addons-101592" exists ...
	I1027 19:00:42.541913  276620 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 19:00:42.559502  276620 ssh_runner.go:195] Run: systemctl --version
	I1027 19:00:42.559557  276620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 19:00:42.577913  276620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 19:00:42.685562  276620 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:00:42.685660  276620 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:00:42.713890  276620 cri.go:89] found id: "3e163786b302c708b17fe3ccf72adc340a75e221cd0896244768e1510b5aa88f"
	I1027 19:00:42.713915  276620 cri.go:89] found id: "838eb4978f205b327ad70c9fe726ab47413794195e774fac18ab9ae207723d6a"
	I1027 19:00:42.713932  276620 cri.go:89] found id: "d6943420da6a4a5c627e3e1859fcefbdc0f116ec559255515e3c76875ef303ec"
	I1027 19:00:42.713936  276620 cri.go:89] found id: "7e2b5aeafd0057d1e7d6c15f07b26a4833c3369e948af52f6ce2a927043ec7ce"
	I1027 19:00:42.713941  276620 cri.go:89] found id: "2aa2da6d8f06b0fc78c255affa19eec7d586298271ac54c0800166ccd1369669"
	I1027 19:00:42.713944  276620 cri.go:89] found id: "50e0b6b85b8a72a9fc162999003b05f4316b2f1d308c55874ee95912e0e1662e"
	I1027 19:00:42.713948  276620 cri.go:89] found id: "cd4e827788ce9c8dcbacf66655ce3bbee1fa66b94eb5c687b755fb2b6f5a5cc1"
	I1027 19:00:42.713952  276620 cri.go:89] found id: "7f055e0b328c77feb86cbf8ba19ed9f73334ca7b97c3bc8004db97b398fbfd75"
	I1027 19:00:42.713956  276620 cri.go:89] found id: "7e472e3b179a0264586ada3ba6103e35c58134a6e668cb3b2b6bf2e04e56940a"
	I1027 19:00:42.713963  276620 cri.go:89] found id: "a9d1dbc41feea92d008c9764ac8993cf482b1be7c976bf1221012c77bb4740da"
	I1027 19:00:42.713967  276620 cri.go:89] found id: "bfa31a6efbb78b65e0d3b92e9683ba4e99525bf68f9335010d133d41e467bee4"
	I1027 19:00:42.713970  276620 cri.go:89] found id: "458122abc2a23963c258beece13449337f47c2ccdc075406236a8a13a8063201"
	I1027 19:00:42.713974  276620 cri.go:89] found id: "2d95c80ef718cfc6804388ffc97159e55cf48ac7c6673231ecb8239d50e7d811"
	I1027 19:00:42.713977  276620 cri.go:89] found id: "a33d881a2daa0476651bd4a08fc46730205cfe03c3681178bb615a5e9635926e"
	I1027 19:00:42.713981  276620 cri.go:89] found id: "8ebcbe2c9975f2a8719a3839dcd24a15c292a81c5c76b171625fac6a82dd851d"
	I1027 19:00:42.713991  276620 cri.go:89] found id: "fe0ea9b1d2cf2cbefbf8c8c1c5b40f8d072efd2d489b534d519c969d9f5078d6"
	I1027 19:00:42.713998  276620 cri.go:89] found id: "f81e4711fbc01c871625a0d7da229dd580feb4c12934b3a38b50966b57a4259b"
	I1027 19:00:42.714003  276620 cri.go:89] found id: "28587c37519da029d28ad619eccddb0f9cf6be1a53a5a3ca6648894a187e90d2"
	I1027 19:00:42.714007  276620 cri.go:89] found id: "1e5034229198563b54bd8e5f173e0492eee9301465992e61e3c894cb36e53dd6"
	I1027 19:00:42.714010  276620 cri.go:89] found id: "d5efbcd6024e7af2a933432a33e2b4197973d93eab2d9c966cb6e488d871f23b"
	I1027 19:00:42.714014  276620 cri.go:89] found id: "4a399d9b30f5e68ec2fe6ac2e4b287b9821a6d963956dc63355919c90bbdbff6"
	I1027 19:00:42.714026  276620 cri.go:89] found id: "752e65ab367c93dbf18c03c14f825896a94d2b17a2a40668c50b5961b7e50972"
	I1027 19:00:42.714029  276620 cri.go:89] found id: "02d35f7174bb35a983c70121d66fa26a8b0d33acfa279db1bc1b3d623b4f8f09"
	I1027 19:00:42.714032  276620 cri.go:89] found id: ""
	I1027 19:00:42.714086  276620 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:00:42.729637  276620 out.go:203] 
	W1027 19:00:42.732462  276620 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:00:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 19:00:42.732483  276620 out.go:285] * 
	W1027 19:00:42.738630  276620 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 19:00:42.741679  276620 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-101592 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (43.83s)
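The CSI scenario itself completed end to end (claim bound, pod ran, snapshot became ready, restore ran); only the two trailing addon-disable calls failed. The block of repeated helpers_test.go:402 lines is a phase poll on the claim. A minimal sketch of that loop, with the kubectl command and jsonpath copied verbatim from the log and the retry budget assumed (the test itself allows 6m0s):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Re-run the exact query from helpers_test.go:402 until the claim binds.
	for i := 0; i < 60; i++ {
		out, err := exec.Command("kubectl", "--context", "addons-101592",
			"get", "pvc", "hpvc", "-n", "default",
			"-o", "jsonpath={.status.phase}").Output()
		if err == nil && string(out) == "Bound" {
			fmt.Println("pvc hpvc is Bound")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("pvc hpvc never reached Bound")
}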

                                                
                                    
TestAddons/parallel/Headlamp (3.06s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-101592 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-101592 --alsologtostderr -v=1: exit status 11 (274.465261ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 18:59:55.908478  274852 out.go:360] Setting OutFile to fd 1 ...
	I1027 18:59:55.909244  274852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:55.909257  274852 out.go:374] Setting ErrFile to fd 2...
	I1027 18:59:55.909262  274852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:55.909534  274852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 18:59:55.909830  274852 mustload.go:65] Loading cluster: addons-101592
	I1027 18:59:55.910197  274852 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:55.910215  274852 addons.go:606] checking whether the cluster is paused
	I1027 18:59:55.910350  274852 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:55.910367  274852 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:59:55.910831  274852 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:59:55.930157  274852 ssh_runner.go:195] Run: systemctl --version
	I1027 18:59:55.930245  274852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:59:55.949580  274852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:59:56.054204  274852 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 18:59:56.054284  274852 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 18:59:56.086522  274852 cri.go:89] found id: "3e163786b302c708b17fe3ccf72adc340a75e221cd0896244768e1510b5aa88f"
	I1027 18:59:56.086546  274852 cri.go:89] found id: "838eb4978f205b327ad70c9fe726ab47413794195e774fac18ab9ae207723d6a"
	I1027 18:59:56.086553  274852 cri.go:89] found id: "d6943420da6a4a5c627e3e1859fcefbdc0f116ec559255515e3c76875ef303ec"
	I1027 18:59:56.086558  274852 cri.go:89] found id: "7e2b5aeafd0057d1e7d6c15f07b26a4833c3369e948af52f6ce2a927043ec7ce"
	I1027 18:59:56.086563  274852 cri.go:89] found id: "2aa2da6d8f06b0fc78c255affa19eec7d586298271ac54c0800166ccd1369669"
	I1027 18:59:56.086567  274852 cri.go:89] found id: "50e0b6b85b8a72a9fc162999003b05f4316b2f1d308c55874ee95912e0e1662e"
	I1027 18:59:56.086570  274852 cri.go:89] found id: "cd4e827788ce9c8dcbacf66655ce3bbee1fa66b94eb5c687b755fb2b6f5a5cc1"
	I1027 18:59:56.086574  274852 cri.go:89] found id: "7f055e0b328c77feb86cbf8ba19ed9f73334ca7b97c3bc8004db97b398fbfd75"
	I1027 18:59:56.086579  274852 cri.go:89] found id: "7e472e3b179a0264586ada3ba6103e35c58134a6e668cb3b2b6bf2e04e56940a"
	I1027 18:59:56.086629  274852 cri.go:89] found id: "a9d1dbc41feea92d008c9764ac8993cf482b1be7c976bf1221012c77bb4740da"
	I1027 18:59:56.086634  274852 cri.go:89] found id: "bfa31a6efbb78b65e0d3b92e9683ba4e99525bf68f9335010d133d41e467bee4"
	I1027 18:59:56.086637  274852 cri.go:89] found id: "458122abc2a23963c258beece13449337f47c2ccdc075406236a8a13a8063201"
	I1027 18:59:56.086641  274852 cri.go:89] found id: "2d95c80ef718cfc6804388ffc97159e55cf48ac7c6673231ecb8239d50e7d811"
	I1027 18:59:56.086644  274852 cri.go:89] found id: "a33d881a2daa0476651bd4a08fc46730205cfe03c3681178bb615a5e9635926e"
	I1027 18:59:56.086647  274852 cri.go:89] found id: "8ebcbe2c9975f2a8719a3839dcd24a15c292a81c5c76b171625fac6a82dd851d"
	I1027 18:59:56.086652  274852 cri.go:89] found id: "fe0ea9b1d2cf2cbefbf8c8c1c5b40f8d072efd2d489b534d519c969d9f5078d6"
	I1027 18:59:56.086656  274852 cri.go:89] found id: "f81e4711fbc01c871625a0d7da229dd580feb4c12934b3a38b50966b57a4259b"
	I1027 18:59:56.086660  274852 cri.go:89] found id: "28587c37519da029d28ad619eccddb0f9cf6be1a53a5a3ca6648894a187e90d2"
	I1027 18:59:56.086664  274852 cri.go:89] found id: "1e5034229198563b54bd8e5f173e0492eee9301465992e61e3c894cb36e53dd6"
	I1027 18:59:56.086667  274852 cri.go:89] found id: "d5efbcd6024e7af2a933432a33e2b4197973d93eab2d9c966cb6e488d871f23b"
	I1027 18:59:56.086672  274852 cri.go:89] found id: "4a399d9b30f5e68ec2fe6ac2e4b287b9821a6d963956dc63355919c90bbdbff6"
	I1027 18:59:56.086675  274852 cri.go:89] found id: "752e65ab367c93dbf18c03c14f825896a94d2b17a2a40668c50b5961b7e50972"
	I1027 18:59:56.086680  274852 cri.go:89] found id: "02d35f7174bb35a983c70121d66fa26a8b0d33acfa279db1bc1b3d623b4f8f09"
	I1027 18:59:56.086686  274852 cri.go:89] found id: ""
	I1027 18:59:56.086739  274852 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 18:59:56.104612  274852 out.go:203] 
	W1027 18:59:56.108895  274852 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T18:59:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 18:59:56.108917  274852 out.go:285] * 
	W1027 18:59:56.115057  274852 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 18:59:56.119388  274852 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-101592 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-101592
helpers_test.go:243: (dbg) docker inspect addons-101592:

-- stdout --
	[
	    {
	        "Id": "6440f0423a17a97cc1b07b80749bdf1a62b64d235db78febedbc667a9b5028d6",
	        "Created": "2025-10-27T18:57:16.770574053Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 269038,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T18:57:16.832542873Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/6440f0423a17a97cc1b07b80749bdf1a62b64d235db78febedbc667a9b5028d6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6440f0423a17a97cc1b07b80749bdf1a62b64d235db78febedbc667a9b5028d6/hostname",
	        "HostsPath": "/var/lib/docker/containers/6440f0423a17a97cc1b07b80749bdf1a62b64d235db78febedbc667a9b5028d6/hosts",
	        "LogPath": "/var/lib/docker/containers/6440f0423a17a97cc1b07b80749bdf1a62b64d235db78febedbc667a9b5028d6/6440f0423a17a97cc1b07b80749bdf1a62b64d235db78febedbc667a9b5028d6-json.log",
	        "Name": "/addons-101592",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-101592:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-101592",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6440f0423a17a97cc1b07b80749bdf1a62b64d235db78febedbc667a9b5028d6",
	                "LowerDir": "/var/lib/docker/overlay2/3bed65a7e0bb8d93131cdb29b4732b3022c6d31a7f852e0b3b4ecc20b11d66ac-init/diff:/var/lib/docker/overlay2/0567218bbe05e8b59b2aef7ad82032d6716040c1f9d9d73e7de8682101079f76/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3bed65a7e0bb8d93131cdb29b4732b3022c6d31a7f852e0b3b4ecc20b11d66ac/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3bed65a7e0bb8d93131cdb29b4732b3022c6d31a7f852e0b3b4ecc20b11d66ac/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3bed65a7e0bb8d93131cdb29b4732b3022c6d31a7f852e0b3b4ecc20b11d66ac/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-101592",
	                "Source": "/var/lib/docker/volumes/addons-101592/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-101592",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-101592",
	                "name.minikube.sigs.k8s.io": "addons-101592",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5dd906d8fbdda779de355066361d1aff27470acef9ca178b571101e47212b552",
	            "SandboxKey": "/var/run/docker/netns/5dd906d8fbdd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-101592": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:be:94:d1:5f:cc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "322e74e7b4664bad1b9706c3bcec00f024011c8e602d4eba745a9fe7ed7c8852",
	                    "EndpointID": "341d9976d109d95f2f607893bc7d1435407d10e62d754b1dcf765939c04ace01",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-101592",
	                        "6440f0423a17"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-101592 -n addons-101592
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-101592 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-101592 logs -n 25: (1.405286367s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-428457 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-428457   │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ delete  │ -p download-only-428457                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-428457   │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ start   │ -o=json --download-only -p download-only-632012 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-632012   │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ delete  │ -p download-only-632012                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-632012   │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ delete  │ -p download-only-428457                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-428457   │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ delete  │ -p download-only-632012                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-632012   │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ start   │ --download-only -p download-docker-980377 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-980377 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	│ delete  │ -p download-docker-980377                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-980377 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ start   │ --download-only -p binary-mirror-324835 --alsologtostderr --binary-mirror http://127.0.0.1:42369 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-324835   │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	│ delete  │ -p binary-mirror-324835                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-324835   │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ addons  │ disable dashboard -p addons-101592                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-101592          │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	│ addons  │ enable dashboard -p addons-101592                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-101592          │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	│ start   │ -p addons-101592 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-101592          │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ addons-101592 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-101592          │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │                     │
	│ addons  │ addons-101592 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-101592          │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │                     │
	│ addons  │ enable headlamp -p addons-101592 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-101592          │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 18:56:51
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 18:56:51.344310  268639 out.go:360] Setting OutFile to fd 1 ...
	I1027 18:56:51.344912  268639 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:56:51.344930  268639 out.go:374] Setting ErrFile to fd 2...
	I1027 18:56:51.344937  268639 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:56:51.345430  268639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 18:56:51.345908  268639 out.go:368] Setting JSON to false
	I1027 18:56:51.346688  268639 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5964,"bootTime":1761585448,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1027 18:56:51.346756  268639 start.go:141] virtualization:  
	I1027 18:56:51.349903  268639 out.go:179] * [addons-101592] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 18:56:51.353540  268639 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 18:56:51.353628  268639 notify.go:220] Checking for updates...
	I1027 18:56:51.359121  268639 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 18:56:51.361952  268639 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 18:56:51.364783  268639 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube
	I1027 18:56:51.367616  268639 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 18:56:51.370388  268639 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 18:56:51.373392  268639 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 18:56:51.402878  268639 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 18:56:51.403029  268639 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 18:56:51.463882  268639 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-27 18:56:51.455127171 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 18:56:51.463987  268639 docker.go:318] overlay module found
	I1027 18:56:51.467217  268639 out.go:179] * Using the docker driver based on user configuration
	I1027 18:56:51.470144  268639 start.go:305] selected driver: docker
	I1027 18:56:51.470170  268639 start.go:925] validating driver "docker" against <nil>
	I1027 18:56:51.470185  268639 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 18:56:51.470890  268639 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 18:56:51.523621  268639 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-10-27 18:56:51.514664242 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 18:56:51.523786  268639 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1027 18:56:51.524020  268639 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 18:56:51.526853  268639 out.go:179] * Using Docker driver with root privileges
	I1027 18:56:51.529709  268639 cni.go:84] Creating CNI manager for ""
	I1027 18:56:51.529781  268639 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 18:56:51.529795  268639 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 18:56:51.529884  268639 start.go:349] cluster config:
	{Name:addons-101592 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-101592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 18:56:51.533012  268639 out.go:179] * Starting "addons-101592" primary control-plane node in "addons-101592" cluster
	I1027 18:56:51.535875  268639 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 18:56:51.538791  268639 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 18:56:51.541546  268639 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 18:56:51.541603  268639 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 18:56:51.541616  268639 cache.go:58] Caching tarball of preloaded images
	I1027 18:56:51.541640  268639 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 18:56:51.541701  268639 preload.go:233] Found /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 18:56:51.541711  268639 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 18:56:51.542094  268639 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/config.json ...
	I1027 18:56:51.542125  268639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/config.json: {Name:mk045c40dedbb543bd714b134e668126fe1c7694 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:51.556460  268639 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1027 18:56:51.556625  268639 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1027 18:56:51.556644  268639 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1027 18:56:51.556649  268639 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1027 18:56:51.556656  268639 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1027 18:56:51.556662  268639 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1027 18:57:09.337139  268639 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1027 18:57:09.337179  268639 cache.go:232] Successfully downloaded all kic artifacts
	I1027 18:57:09.337209  268639 start.go:360] acquireMachinesLock for addons-101592: {Name:mk6d8d9111d5dfe86e292b53fd2763254776e2b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 18:57:09.337333  268639 start.go:364] duration metric: took 103.908µs to acquireMachinesLock for "addons-101592"
	I1027 18:57:09.337365  268639 start.go:93] Provisioning new machine with config: &{Name:addons-101592 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-101592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 18:57:09.337438  268639 start.go:125] createHost starting for "" (driver="docker")
	I1027 18:57:09.340936  268639 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1027 18:57:09.341202  268639 start.go:159] libmachine.API.Create for "addons-101592" (driver="docker")
	I1027 18:57:09.341253  268639 client.go:168] LocalClient.Create starting
	I1027 18:57:09.341386  268639 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem
	I1027 18:57:09.570318  268639 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem
	I1027 18:57:09.981254  268639 cli_runner.go:164] Run: docker network inspect addons-101592 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 18:57:09.996548  268639 cli_runner.go:211] docker network inspect addons-101592 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 18:57:09.996642  268639 network_create.go:284] running [docker network inspect addons-101592] to gather additional debugging logs...
	I1027 18:57:09.996663  268639 cli_runner.go:164] Run: docker network inspect addons-101592
	W1027 18:57:10.012494  268639 cli_runner.go:211] docker network inspect addons-101592 returned with exit code 1
	I1027 18:57:10.012525  268639 network_create.go:287] error running [docker network inspect addons-101592]: docker network inspect addons-101592: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-101592 not found
	I1027 18:57:10.012540  268639 network_create.go:289] output of [docker network inspect addons-101592]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-101592 not found
	
	** /stderr **
	I1027 18:57:10.012641  268639 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 18:57:10.036040  268639 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001918aa0}
	I1027 18:57:10.036085  268639 network_create.go:124] attempt to create docker network addons-101592 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1027 18:57:10.036165  268639 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-101592 addons-101592
	I1027 18:57:10.095667  268639 network_create.go:108] docker network addons-101592 192.168.49.0/24 created
	I1027 18:57:10.095716  268639 kic.go:121] calculated static IP "192.168.49.2" for the "addons-101592" container
	I1027 18:57:10.095795  268639 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 18:57:10.111089  268639 cli_runner.go:164] Run: docker volume create addons-101592 --label name.minikube.sigs.k8s.io=addons-101592 --label created_by.minikube.sigs.k8s.io=true
	I1027 18:57:10.130170  268639 oci.go:103] Successfully created a docker volume addons-101592
	I1027 18:57:10.130268  268639 cli_runner.go:164] Run: docker run --rm --name addons-101592-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-101592 --entrypoint /usr/bin/test -v addons-101592:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 18:57:12.298196  268639 cli_runner.go:217] Completed: docker run --rm --name addons-101592-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-101592 --entrypoint /usr/bin/test -v addons-101592:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (2.167887458s)
	I1027 18:57:12.298229  268639 oci.go:107] Successfully prepared a docker volume addons-101592
	I1027 18:57:12.298255  268639 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 18:57:12.298273  268639 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 18:57:12.298341  268639 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-101592:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1027 18:57:16.696204  268639 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-101592:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.397811373s)
	I1027 18:57:16.696234  268639 kic.go:203] duration metric: took 4.397957921s to extract preloaded images to volume ...
	W1027 18:57:16.696382  268639 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1027 18:57:16.696491  268639 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 18:57:16.756137  268639 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-101592 --name addons-101592 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-101592 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-101592 --network addons-101592 --ip 192.168.49.2 --volume addons-101592:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 18:57:17.059533  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Running}}
	I1027 18:57:17.080775  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:17.107061  268639 cli_runner.go:164] Run: docker exec addons-101592 stat /var/lib/dpkg/alternatives/iptables
	I1027 18:57:17.173121  268639 oci.go:144] the created container "addons-101592" has a running status.
	I1027 18:57:17.173153  268639 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa...
	I1027 18:57:17.327548  268639 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 18:57:17.352781  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:17.374890  268639 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 18:57:17.374914  268639 kic_runner.go:114] Args: [docker exec --privileged addons-101592 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 18:57:17.431709  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:17.463645  268639 machine.go:93] provisionDockerMachine start ...
	I1027 18:57:17.463864  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:17.501069  268639 main.go:141] libmachine: Using SSH client type: native
	I1027 18:57:17.501423  268639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1027 18:57:17.501434  268639 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 18:57:17.503514  268639 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1027 18:57:20.654368  268639 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-101592
	
	I1027 18:57:20.654392  268639 ubuntu.go:182] provisioning hostname "addons-101592"
	I1027 18:57:20.654460  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:20.671213  268639 main.go:141] libmachine: Using SSH client type: native
	I1027 18:57:20.671526  268639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1027 18:57:20.671543  268639 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-101592 && echo "addons-101592" | sudo tee /etc/hostname
	I1027 18:57:20.831851  268639 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-101592
	
	I1027 18:57:20.831942  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:20.851902  268639 main.go:141] libmachine: Using SSH client type: native
	I1027 18:57:20.852218  268639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1027 18:57:20.852237  268639 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-101592' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-101592/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-101592' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 18:57:20.998975  268639 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 18:57:20.999027  268639 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-266035/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-266035/.minikube}
	I1027 18:57:20.999047  268639 ubuntu.go:190] setting up certificates
	I1027 18:57:20.999065  268639 provision.go:84] configureAuth start
	I1027 18:57:20.999121  268639 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-101592
	I1027 18:57:21.016601  268639 provision.go:143] copyHostCerts
	I1027 18:57:21.016698  268639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem (1123 bytes)
	I1027 18:57:21.016860  268639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem (1675 bytes)
	I1027 18:57:21.016918  268639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem (1078 bytes)
	I1027 18:57:21.016965  268639 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem org=jenkins.addons-101592 san=[127.0.0.1 192.168.49.2 addons-101592 localhost minikube]
	I1027 18:57:21.332569  268639 provision.go:177] copyRemoteCerts
	I1027 18:57:21.332637  268639 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 18:57:21.332679  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:21.349510  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:21.455164  268639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 18:57:21.472974  268639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1027 18:57:21.490811  268639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 18:57:21.507969  268639 provision.go:87] duration metric: took 508.890093ms to configureAuth
	I1027 18:57:21.507994  268639 ubuntu.go:206] setting minikube options for container-runtime
	I1027 18:57:21.508185  268639 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:57:21.508299  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:21.525664  268639 main.go:141] libmachine: Using SSH client type: native
	I1027 18:57:21.526011  268639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1027 18:57:21.526026  268639 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 18:57:21.784028  268639 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 18:57:21.784113  268639 machine.go:96] duration metric: took 4.320377305s to provisionDockerMachine
	I1027 18:57:21.784140  268639 client.go:171] duration metric: took 12.442876081s to LocalClient.Create
	I1027 18:57:21.784187  268639 start.go:167] duration metric: took 12.442983664s to libmachine.API.Create "addons-101592"
	I1027 18:57:21.784211  268639 start.go:293] postStartSetup for "addons-101592" (driver="docker")
	I1027 18:57:21.784235  268639 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 18:57:21.784318  268639 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 18:57:21.784423  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:21.802390  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:21.907824  268639 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 18:57:21.910935  268639 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 18:57:21.910964  268639 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 18:57:21.911000  268639 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/addons for local assets ...
	I1027 18:57:21.911068  268639 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/files for local assets ...
	I1027 18:57:21.911097  268639 start.go:296] duration metric: took 126.866646ms for postStartSetup
	I1027 18:57:21.911412  268639 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-101592
	I1027 18:57:21.927435  268639 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/config.json ...
	I1027 18:57:21.927738  268639 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 18:57:21.928042  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:21.947817  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:22.048221  268639 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 18:57:22.053118  268639 start.go:128] duration metric: took 12.715661932s to createHost
	I1027 18:57:22.053144  268639 start.go:83] releasing machines lock for "addons-101592", held for 12.715797199s
	I1027 18:57:22.053219  268639 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-101592
	I1027 18:57:22.070820  268639 ssh_runner.go:195] Run: cat /version.json
	I1027 18:57:22.070873  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:22.070901  268639 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 18:57:22.070957  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:22.091446  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:22.107477  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:22.286110  268639 ssh_runner.go:195] Run: systemctl --version
	I1027 18:57:22.292273  268639 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 18:57:22.327439  268639 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 18:57:22.331498  268639 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 18:57:22.331566  268639 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 18:57:22.359263  268639 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
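The disable step simply renames any bridge/podman CNI configs so the runtime ignores them; kindnet supplies the pod network instead. Undoing it by hand is the reverse rename (the .mk_disabled suffix is minikube's own convention; the path below is the one reported in the log):

	# see what was parked
	ls /etc/cni/net.d/*.mk_disabled
	# restore a config if the host's bridge CNI is needed again
	sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled \
	        /etc/cni/net.d/87-podman-bridge.conflist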
	I1027 18:57:22.359285  268639 start.go:495] detecting cgroup driver to use...
	I1027 18:57:22.359318  268639 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 18:57:22.359368  268639 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 18:57:22.376291  268639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 18:57:22.388549  268639 docker.go:218] disabling cri-docker service (if available) ...
	I1027 18:57:22.388615  268639 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 18:57:22.406207  268639 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 18:57:22.424890  268639 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 18:57:22.546017  268639 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 18:57:22.672621  268639 docker.go:234] disabling docker service ...
	I1027 18:57:22.672726  268639 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 18:57:22.694438  268639 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 18:57:22.707859  268639 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 18:57:22.822221  268639 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 18:57:22.942216  268639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 18:57:22.954534  268639 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 18:57:22.967923  268639 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 18:57:22.968038  268639 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:57:22.976498  268639 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 18:57:22.976566  268639 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:57:22.984728  268639 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:57:22.992956  268639 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:57:23.001100  268639 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 18:57:23.008913  268639 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:57:23.018345  268639 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:57:23.031958  268639 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:57:23.040438  268639 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 18:57:23.047966  268639 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 18:57:23.054954  268639 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 18:57:23.159325  268639 ssh_runner.go:195] Run: sudo systemctl restart crio
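After the sed edits above and the restart, the merged CRI-O configuration should carry the keys the commands touched. A quick way to confirm, using the same crio config call this log runs a few lines later (expected values sketched from the edits, not a verbatim dump):

	# print the merged CRI-O config and pick out the keys the sed edits touched
	sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start'
	# expected (roughly):
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",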
	I1027 18:57:23.280722  268639 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 18:57:23.280840  268639 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 18:57:23.284720  268639 start.go:563] Will wait 60s for crictl version
	I1027 18:57:23.284804  268639 ssh_runner.go:195] Run: which crictl
	I1027 18:57:23.288557  268639 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 18:57:23.312542  268639 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 18:57:23.312661  268639 ssh_runner.go:195] Run: crio --version
	I1027 18:57:23.342294  268639 ssh_runner.go:195] Run: crio --version
	I1027 18:57:23.371912  268639 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 18:57:23.374767  268639 cli_runner.go:164] Run: docker network inspect addons-101592 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
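The Go template passed to docker network inspect above flattens the IPAM block into JSON; if only the subnet and gateway are of interest, a much shorter format string reaches the same fields:

	docker network inspect addons-101592 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	# e.g. 192.168.49.0/24 192.168.49.1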
	I1027 18:57:23.392051  268639 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1027 18:57:23.395676  268639 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
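That one-liner is an idempotent hosts-file update: drop any stale line for the name, append the fresh mapping, then copy the result back as root (a plain > redirection would fail, since the shell, not sudo, opens /etc/hosts). Generalized as a small function (the function name and interface are an illustration, not minikube code):

	update_hosts() {   # usage: update_hosts 192.168.49.1 host.minikube.internal
	  local ip="$1" name="$2" tmp
	  tmp=$(mktemp)
	  { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "$tmp"
	  sudo cp "$tmp" /etc/hosts && rm -f "$tmp"
	}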
	I1027 18:57:23.405280  268639 kubeadm.go:883] updating cluster {Name:addons-101592 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-101592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 18:57:23.405420  268639 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 18:57:23.405501  268639 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 18:57:23.438906  268639 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 18:57:23.438934  268639 crio.go:433] Images already preloaded, skipping extraction
	I1027 18:57:23.439026  268639 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 18:57:23.463549  268639 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 18:57:23.463577  268639 cache_images.go:85] Images are preloaded, skipping loading
	I1027 18:57:23.463586  268639 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1027 18:57:23.463671  268639 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-101592 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-101592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 18:57:23.463753  268639 ssh_runner.go:195] Run: crio config
	I1027 18:57:23.516609  268639 cni.go:84] Creating CNI manager for ""
	I1027 18:57:23.516630  268639 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 18:57:23.516655  268639 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 18:57:23.516705  268639 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-101592 NodeName:addons-101592 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 18:57:23.516856  268639 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-101592"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 18:57:23.516931  268639 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 18:57:23.524912  268639 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 18:57:23.525013  268639 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 18:57:23.532698  268639 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1027 18:57:23.545455  268639 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 18:57:23.558521  268639 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
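The rendered kubeadm config lands at /var/tmp/minikube/kubeadm.yaml.new before being promoted to kubeadm.yaml. Newer kubeadm releases can sanity-check such a file before init; a sketch, assuming the validate subcommand is present in this kubeadm build:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new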
	I1027 18:57:23.571575  268639 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1027 18:57:23.575048  268639 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 18:57:23.584989  268639 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 18:57:23.707287  268639 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 18:57:23.722305  268639 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592 for IP: 192.168.49.2
	I1027 18:57:23.722375  268639 certs.go:195] generating shared ca certs ...
	I1027 18:57:23.722406  268639 certs.go:227] acquiring lock for ca certs: {Name:mk172548fc54811b51040cc0201d01eb4b3dd19b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:23.722577  268639 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key
	I1027 18:57:23.791332  268639 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt ...
	I1027 18:57:23.791406  268639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt: {Name:mkab07fc960645e058a12a29888618199563b2d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:23.791609  268639 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key ...
	I1027 18:57:23.791630  268639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key: {Name:mkf51e48da48d79f5b53f47b013afa79ea5d78e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:23.791725  268639 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key
	I1027 18:57:24.536456  268639 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.crt ...
	I1027 18:57:24.536489  268639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.crt: {Name:mkbe7b0e104d91908762b0382eb112f017333bf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:24.536687  268639 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key ...
	I1027 18:57:24.536702  268639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key: {Name:mkd483c24f05e0e063c384b3cc3e67b2223c5e0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:24.536776  268639 certs.go:257] generating profile certs ...
	I1027 18:57:24.536836  268639 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.key
	I1027 18:57:24.536855  268639 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt with IP's: []
	I1027 18:57:24.751302  268639 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt ...
	I1027 18:57:24.751336  268639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: {Name:mk5686a3a2e49db78e669096e059dd37e074b59e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:24.751531  268639 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.key ...
	I1027 18:57:24.751545  268639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.key: {Name:mk1dc59feefb4276e6fb4bcc52102e4ae7b37c45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:24.751652  268639 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/apiserver.key.7bf29f12
	I1027 18:57:24.751673  268639 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/apiserver.crt.7bf29f12 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1027 18:57:26.047734  268639 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/apiserver.crt.7bf29f12 ...
	I1027 18:57:26.047769  268639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/apiserver.crt.7bf29f12: {Name:mkb44ef4b98fce9af3b9c9e924ab7fac7612f78e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:26.047985  268639 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/apiserver.key.7bf29f12 ...
	I1027 18:57:26.047999  268639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/apiserver.key.7bf29f12: {Name:mk169b2631dbedc732b16442eedc054ed2811995 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:26.048089  268639 certs.go:382] copying /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/apiserver.crt.7bf29f12 -> /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/apiserver.crt
	I1027 18:57:26.048176  268639 certs.go:386] copying /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/apiserver.key.7bf29f12 -> /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/apiserver.key
	I1027 18:57:26.048230  268639 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/proxy-client.key
	I1027 18:57:26.048254  268639 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/proxy-client.crt with IP's: []
	I1027 18:57:26.925460  268639 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/proxy-client.crt ...
	I1027 18:57:26.925492  268639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/proxy-client.crt: {Name:mk667682862ed7e86975a2bb3d5fed80ecd1608c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:26.925683  268639 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/proxy-client.key ...
	I1027 18:57:26.925697  268639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/proxy-client.key: {Name:mk46a22031d759578af616ea845bee9a4972120e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:26.925896  268639 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 18:57:26.925934  268639 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem (1078 bytes)
	I1027 18:57:26.926029  268639 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem (1123 bytes)
	I1027 18:57:26.926072  268639 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem (1675 bytes)
	I1027 18:57:26.926636  268639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 18:57:26.944584  268639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 18:57:26.962295  268639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 18:57:26.979609  268639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 18:57:26.997025  268639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 18:57:27.015956  268639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 18:57:27.035615  268639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 18:57:27.053222  268639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 18:57:27.070087  268639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 18:57:27.087205  268639 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 18:57:27.099638  268639 ssh_runner.go:195] Run: openssl version
	I1027 18:57:27.105895  268639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 18:57:27.114239  268639 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 18:57:27.118191  268639 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1027 18:57:27.118263  268639 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 18:57:27.159444  268639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
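The b5213941.0 symlink name is not arbitrary: OpenSSL looks certificates up by subject hash, which is exactly what the openssl x509 -hash call two lines up computes. Recreating the link by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"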
	I1027 18:57:27.167759  268639 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 18:57:27.171419  268639 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 18:57:27.171469  268639 kubeadm.go:400] StartCluster: {Name:addons-101592 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-101592 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 18:57:27.171545  268639 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 18:57:27.171623  268639 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 18:57:27.199269  268639 cri.go:89] found id: ""
	I1027 18:57:27.199338  268639 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 18:57:27.207380  268639 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 18:57:27.215416  268639 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1027 18:57:27.215486  268639 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 18:57:27.223770  268639 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 18:57:27.223788  268639 kubeadm.go:157] found existing configuration files:
	
	I1027 18:57:27.223866  268639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 18:57:27.231730  268639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 18:57:27.231841  268639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 18:57:27.239229  268639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 18:57:27.246908  268639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 18:57:27.247016  268639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 18:57:27.254430  268639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 18:57:27.262322  268639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 18:57:27.262438  268639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 18:57:27.269907  268639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 18:57:27.277824  268639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 18:57:27.277929  268639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 18:57:27.285534  268639 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 18:57:27.325388  268639 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1027 18:57:27.325680  268639 kubeadm.go:318] [preflight] Running pre-flight checks
	I1027 18:57:27.350776  268639 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1027 18:57:27.350934  268639 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1027 18:57:27.351026  268639 kubeadm.go:318] OS: Linux
	I1027 18:57:27.351111  268639 kubeadm.go:318] CGROUPS_CPU: enabled
	I1027 18:57:27.351202  268639 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1027 18:57:27.351285  268639 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1027 18:57:27.351374  268639 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1027 18:57:27.351459  268639 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1027 18:57:27.351544  268639 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1027 18:57:27.351627  268639 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1027 18:57:27.351713  268639 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1027 18:57:27.351800  268639 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1027 18:57:27.415055  268639 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 18:57:27.415247  268639 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 18:57:27.415389  268639 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
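As the preflight hint says, the control-plane images can be fetched ahead of time so init does not block on the network; against this cluster's rendered config that would look like:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config images pull \
	  --config /var/tmp/minikube/kubeadm.yaml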
	I1027 18:57:27.427490  268639 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 18:57:27.431694  268639 out.go:252]   - Generating certificates and keys ...
	I1027 18:57:27.431880  268639 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1027 18:57:27.432004  268639 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1027 18:57:27.745776  268639 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 18:57:28.095074  268639 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1027 18:57:28.752082  268639 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1027 18:57:28.971103  268639 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1027 18:57:29.275279  268639 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1027 18:57:29.275624  268639 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-101592 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1027 18:57:29.607454  268639 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1027 18:57:29.607804  268639 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-101592 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1027 18:57:29.962559  268639 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 18:57:31.042682  268639 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 18:57:31.216462  268639 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1027 18:57:31.216774  268639 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 18:57:32.564332  268639 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 18:57:34.217167  268639 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 18:57:34.461204  268639 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 18:57:35.385694  268639 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 18:57:35.931494  268639 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 18:57:35.932078  268639 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 18:57:35.934570  268639 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 18:57:35.938025  268639 out.go:252]   - Booting up control plane ...
	I1027 18:57:35.938134  268639 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 18:57:35.938216  268639 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 18:57:35.938288  268639 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 18:57:35.953613  268639 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 18:57:35.954121  268639 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 18:57:35.961752  268639 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 18:57:35.962082  268639 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 18:57:35.962129  268639 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1027 18:57:36.101101  268639 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 18:57:36.101228  268639 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 18:57:37.107595  268639 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.005121736s
	I1027 18:57:37.109980  268639 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 18:57:37.110086  268639 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1027 18:57:37.110421  268639 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 18:57:37.111461  268639 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 18:57:39.331823  268639 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.220786228s
	I1027 18:57:41.928158  268639 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.817301764s
	I1027 18:57:43.612092  268639 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.50144149s
	I1027 18:57:43.631068  268639 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 18:57:43.652321  268639 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 18:57:43.666198  268639 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 18:57:43.666422  268639 kubeadm.go:318] [mark-control-plane] Marking the node addons-101592 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 18:57:43.678705  268639 kubeadm.go:318] [bootstrap-token] Using token: enw9xt.cnnain00qnmfg1uu
	I1027 18:57:43.681794  268639 out.go:252]   - Configuring RBAC rules ...
	I1027 18:57:43.681928  268639 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 18:57:43.687486  268639 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 18:57:43.698913  268639 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 18:57:43.702835  268639 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 18:57:43.706798  268639 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 18:57:43.710922  268639 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 18:57:44.022325  268639 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 18:57:44.468868  268639 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1027 18:57:45.023983  268639 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1027 18:57:45.024118  268639 kubeadm.go:318] 
	I1027 18:57:45.024199  268639 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1027 18:57:45.024209  268639 kubeadm.go:318] 
	I1027 18:57:45.024291  268639 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1027 18:57:45.024297  268639 kubeadm.go:318] 
	I1027 18:57:45.024323  268639 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1027 18:57:45.024387  268639 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 18:57:45.024441  268639 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 18:57:45.024446  268639 kubeadm.go:318] 
	I1027 18:57:45.024504  268639 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1027 18:57:45.024509  268639 kubeadm.go:318] 
	I1027 18:57:45.024559  268639 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 18:57:45.024564  268639 kubeadm.go:318] 
	I1027 18:57:45.024618  268639 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1027 18:57:45.024698  268639 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 18:57:45.024770  268639 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 18:57:45.024775  268639 kubeadm.go:318] 
	I1027 18:57:45.039654  268639 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 18:57:45.039761  268639 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1027 18:57:45.039766  268639 kubeadm.go:318] 
	I1027 18:57:45.039858  268639 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token enw9xt.cnnain00qnmfg1uu \
	I1027 18:57:45.039969  268639 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9473f077605f6866eb08c8a712af7c56a0deb8dc2a907ec8cbb4b8ae396e8f1f \
	I1027 18:57:45.039992  268639 kubeadm.go:318] 	--control-plane 
	I1027 18:57:45.039997  268639 kubeadm.go:318] 
	I1027 18:57:45.040089  268639 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1027 18:57:45.040094  268639 kubeadm.go:318] 
	I1027 18:57:45.040182  268639 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token enw9xt.cnnain00qnmfg1uu \
	I1027 18:57:45.040292  268639 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9473f077605f6866eb08c8a712af7c56a0deb8dc2a907ec8cbb4b8ae396e8f1f 
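The --discovery-token-ca-cert-hash printed above can be recomputed from the CA at any time; this is the standard recipe from the kubeadm documentation, pointed at minikube's certificate directory rather than /etc/kubernetes/pki:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# should print 9473f077605f6866eb08c8a712af7c56a0deb8dc2a907ec8cbb4b8ae396e8f1f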
	I1027 18:57:45.057882  268639 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1027 18:57:45.058154  268639 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1027 18:57:45.058270  268639 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 18:57:45.058294  268639 cni.go:84] Creating CNI manager for ""
	I1027 18:57:45.058304  268639 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 18:57:45.062021  268639 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 18:57:45.066557  268639 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 18:57:45.076571  268639 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 18:57:45.076595  268639 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 18:57:45.097879  268639 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 18:57:45.590455  268639 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 18:57:45.590587  268639 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:45.590669  268639 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-101592 minikube.k8s.io/updated_at=2025_10_27T18_57_45_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f minikube.k8s.io/name=addons-101592 minikube.k8s.io/primary=true
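Those two kubectl calls grant kube-system:default cluster-admin (minikube's bootstrap RBAC shortcut) and stamp the node with minikube's bookkeeping labels; both are easy to verify afterwards with kubectl pointed at the new cluster:

	kubectl get clusterrolebinding minikube-rbac -o wide
	kubectl get node addons-101592 --show-labels | tr ',' '\n' | grep minikube.k8s.io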
	I1027 18:57:45.750330  268639 ops.go:34] apiserver oom_adj: -16
	I1027 18:57:45.750415  268639 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:46.251072  268639 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:46.750951  268639 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:47.250529  268639 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:47.751459  268639 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:48.250517  268639 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:48.750450  268639 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:49.251013  268639 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:49.751305  268639 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:49.851016  268639 kubeadm.go:1113] duration metric: took 4.260471325s to wait for elevateKubeSystemPrivileges
	I1027 18:57:49.851042  268639 kubeadm.go:402] duration metric: took 22.679577507s to StartCluster
	I1027 18:57:49.851059  268639 settings.go:142] acquiring lock: {Name:mk49b9e2e9b7901c6b51e89b8b1e8ee7f9e88107 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:49.851166  268639 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 18:57:49.851549  268639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/kubeconfig: {Name:mk0a03a594f8d9f9199b1c7cff7c550cec8414c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:49.851742  268639 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 18:57:49.851913  268639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 18:57:49.852153  268639 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:57:49.852183  268639 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
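The toEnable map above is what drives the per-addon setup that follows; outside the test harness the same switches are exposed through the minikube CLI, for example:

	out/minikube-linux-arm64 -p addons-101592 addons list
	out/minikube-linux-arm64 -p addons-101592 addons enable metrics-server
	out/minikube-linux-arm64 -p addons-101592 addons disable volcano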
	I1027 18:57:49.852261  268639 addons.go:69] Setting yakd=true in profile "addons-101592"
	I1027 18:57:49.852274  268639 addons.go:238] Setting addon yakd=true in "addons-101592"
	I1027 18:57:49.852295  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:49.852764  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:49.852935  268639 addons.go:69] Setting inspektor-gadget=true in profile "addons-101592"
	I1027 18:57:49.852947  268639 addons.go:238] Setting addon inspektor-gadget=true in "addons-101592"
	I1027 18:57:49.852967  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:49.853347  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:49.854601  268639 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-101592"
	I1027 18:57:49.854632  268639 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-101592"
	I1027 18:57:49.854670  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:49.855154  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:49.855297  268639 addons.go:69] Setting metrics-server=true in profile "addons-101592"
	I1027 18:57:49.855362  268639 addons.go:238] Setting addon metrics-server=true in "addons-101592"
	I1027 18:57:49.855404  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:49.855953  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:49.865482  268639 out.go:179] * Verifying Kubernetes components...
	I1027 18:57:49.865667  268639 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-101592"
	I1027 18:57:49.865727  268639 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-101592"
	I1027 18:57:49.865745  268639 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-101592"
	I1027 18:57:49.865803  268639 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-101592"
	I1027 18:57:49.865836  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:49.865871  268639 addons.go:69] Setting registry=true in profile "addons-101592"
	I1027 18:57:49.865908  268639 addons.go:238] Setting addon registry=true in "addons-101592"
	I1027 18:57:49.866003  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:49.866405  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:49.866668  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:49.879058  268639 addons.go:69] Setting default-storageclass=true in profile "addons-101592"
	I1027 18:57:49.879095  268639 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-101592"
	I1027 18:57:49.879478  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:49.886513  268639 addons.go:69] Setting registry-creds=true in profile "addons-101592"
	I1027 18:57:49.886612  268639 addons.go:238] Setting addon registry-creds=true in "addons-101592"
	I1027 18:57:49.886685  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:49.890716  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:49.900026  268639 addons.go:69] Setting gcp-auth=true in profile "addons-101592"
	I1027 18:57:49.900071  268639 mustload.go:65] Loading cluster: addons-101592
	I1027 18:57:49.900302  268639 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:57:49.900582  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:49.911610  268639 addons.go:69] Setting storage-provisioner=true in profile "addons-101592"
	I1027 18:57:49.911660  268639 addons.go:238] Setting addon storage-provisioner=true in "addons-101592"
	I1027 18:57:49.911696  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:49.912225  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:49.930419  268639 addons.go:69] Setting ingress=true in profile "addons-101592"
	I1027 18:57:49.930462  268639 addons.go:238] Setting addon ingress=true in "addons-101592"
	I1027 18:57:49.930509  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:49.935417  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:49.939066  268639 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-101592"
	I1027 18:57:49.939098  268639 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-101592"
	I1027 18:57:49.939478  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:49.955452  268639 addons.go:69] Setting ingress-dns=true in profile "addons-101592"
	I1027 18:57:49.955481  268639 addons.go:238] Setting addon ingress-dns=true in "addons-101592"
	I1027 18:57:49.955528  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:49.956051  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:49.958123  268639 addons.go:69] Setting volcano=true in profile "addons-101592"
	I1027 18:57:49.958157  268639 addons.go:238] Setting addon volcano=true in "addons-101592"
	I1027 18:57:49.958191  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:49.958638  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:49.865836  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:49.972390  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:49.975334  268639 addons.go:69] Setting volumesnapshots=true in profile "addons-101592"
	I1027 18:57:49.975364  268639 addons.go:238] Setting addon volumesnapshots=true in "addons-101592"
	I1027 18:57:49.975398  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:49.975861  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:49.865737  268639 addons.go:69] Setting cloud-spanner=true in profile "addons-101592"
	I1027 18:57:49.992826  268639 addons.go:238] Setting addon cloud-spanner=true in "addons-101592"
	I1027 18:57:49.992878  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:49.993348  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:50.015197  268639 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 18:57:50.051189  268639 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1027 18:57:50.074977  268639 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1027 18:57:50.094612  268639 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1027 18:57:50.123253  268639 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1027 18:57:50.123277  268639 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1027 18:57:50.123356  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
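The longer inspect template here extracts the published host port for container port 22/tcp from the container's network settings; this is where the Port:33128 in the sshutil lines further down comes from. A hand-run equivalent (the 33128 value is taken from this run):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-101592
	# → 33128; sshutil then dials 127.0.0.1:33128 as user "docker"
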
	I1027 18:57:50.137026  268639 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1027 18:57:50.137776  268639 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1027 18:57:50.144317  268639 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1027 18:57:50.144643  268639 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1027 18:57:50.144720  268639 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1027 18:57:50.144795  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:50.168753  268639 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1027 18:57:50.168775  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1027 18:57:50.168847  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:50.174639  268639 out.go:179]   - Using image docker.io/registry:3.0.0
	I1027 18:57:50.190922  268639 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1027 18:57:50.190970  268639 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1027 18:57:50.192435  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:50.199563  268639 addons.go:238] Setting addon default-storageclass=true in "addons-101592"
	I1027 18:57:50.199602  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:50.200041  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:50.224686  268639 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1027 18:57:50.225324  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:50.231778  268639 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1027 18:57:50.231805  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1027 18:57:50.231873  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:50.265818  268639 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-101592"
	I1027 18:57:50.265866  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:50.266395  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:50.298205  268639 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1027 18:57:50.298275  268639 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1027 18:57:50.299562  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:50.300381  268639 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 18:57:50.301396  268639 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1027 18:57:50.301417  268639 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1027 18:57:50.301484  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	W1027 18:57:50.310566  268639 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1027 18:57:50.311068  268639 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1027 18:57:50.311085  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1027 18:57:50.311145  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:50.313526  268639 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1027 18:57:50.313741  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:50.323197  268639 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 18:57:50.323418  268639 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1027 18:57:50.323432  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1027 18:57:50.323498  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:50.323859  268639 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1027 18:57:50.324027  268639 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 18:57:50.324200  268639 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1027 18:57:50.331555  268639 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1027 18:57:50.335898  268639 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1027 18:57:50.335934  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1027 18:57:50.335996  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:50.345195  268639 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 18:57:50.345220  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 18:57:50.345342  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:50.350395  268639 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1027 18:57:50.350416  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1027 18:57:50.350478  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:50.365640  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:50.366581  268639 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1027 18:57:50.366787  268639 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1027 18:57:50.375113  268639 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1027 18:57:50.375366  268639 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1027 18:57:50.375382  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1027 18:57:50.375456  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:50.385852  268639 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1027 18:57:50.402578  268639 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1027 18:57:50.411172  268639 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1027 18:57:50.420826  268639 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1027 18:57:50.420951  268639 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1027 18:57:50.421058  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:50.427032  268639 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1027 18:57:50.431424  268639 out.go:179]   - Using image docker.io/busybox:stable
	I1027 18:57:50.438350  268639 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1027 18:57:50.438371  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1027 18:57:50.438443  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:50.458441  268639 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 18:57:50.458507  268639 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 18:57:50.458606  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:50.493054  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:50.493833  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:50.494448  268639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 18:57:50.528985  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:50.529492  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:50.536762  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:50.560208  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:50.561057  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:50.567146  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:50.585638  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:50.596412  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:50.598238  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:50.614147  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	W1027 18:57:50.615157  268639 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1027 18:57:50.615185  268639 retry.go:31] will retry after 337.973811ms: ssh: handshake failed: EOF
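The handshake EOF above is a transient failure from opening many ssh sessions against the node at once; sshutil simply waits ~338ms and redials. A minimal shell sketch of the same retry-with-delay idea (port and user taken from this log; the loop itself is illustrative, not minikube's implementation):

	for delay in 0.3 0.6 1.2; do
	  ssh -p 33128 docker@127.0.0.1 true && break
	  sleep "$delay"
	done
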
	I1027 18:57:50.656616  268639 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 18:57:51.020012  268639 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1027 18:57:51.020089  268639 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1027 18:57:51.057381  268639 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1027 18:57:51.057405  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1027 18:57:51.134953  268639 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1027 18:57:51.135090  268639 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1027 18:57:51.138231  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1027 18:57:51.146580  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1027 18:57:51.151600  268639 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:51.151665  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1027 18:57:51.155903  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1027 18:57:51.213921  268639 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1027 18:57:51.214001  268639 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1027 18:57:51.215636  268639 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1027 18:57:51.215706  268639 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1027 18:57:51.243044  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1027 18:57:51.261610  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:51.267076  268639 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1027 18:57:51.267139  268639 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1027 18:57:51.383329  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1027 18:57:51.432641  268639 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1027 18:57:51.432662  268639 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1027 18:57:51.440786  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1027 18:57:51.450130  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 18:57:51.517945  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1027 18:57:51.550782  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 18:57:51.604373  268639 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1027 18:57:51.604443  268639 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1027 18:57:51.613547  268639 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1027 18:57:51.613618  268639 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1027 18:57:51.638785  268639 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1027 18:57:51.638859  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1027 18:57:51.652944  268639 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1027 18:57:51.653019  268639 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1027 18:57:51.675977  268639 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1027 18:57:51.676050  268639 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1027 18:57:51.798010  268639 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1027 18:57:51.798087  268639 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1027 18:57:51.820964  268639 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1027 18:57:51.821031  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1027 18:57:51.872349  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1027 18:57:51.885417  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1027 18:57:52.046798  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1027 18:57:52.057175  268639 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1027 18:57:52.057254  268639 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1027 18:57:52.061834  268639 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1027 18:57:52.061913  268639 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1027 18:57:52.159933  268639 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.503284675s)
	I1027 18:57:52.160099  268639 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.665628366s)
	I1027 18:57:52.160135  268639 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
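The pipeline that just completed fetches the coredns ConfigMap, splices a hosts block in front of the "forward . /etc/resolv.conf" directive with sed (plus a "log" directive before "errors"), and pushes the result back with kubectl replace. The Corefile fragment it injects is exactly:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}

With that in place, pods can resolve host.minikube.internal to 192.168.49.1, the host side of the cluster network (the node itself is 192.168.49.2).
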
	I1027 18:57:52.161634  268639 node_ready.go:35] waiting up to 6m0s for node "addons-101592" to be "Ready" ...
	I1027 18:57:52.231389  268639 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1027 18:57:52.231410  268639 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1027 18:57:52.281477  268639 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1027 18:57:52.281496  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1027 18:57:52.554845  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.416522301s)
	I1027 18:57:52.554933  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.408280587s)
	I1027 18:57:52.610533  268639 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1027 18:57:52.610559  268639 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1027 18:57:52.636009  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1027 18:57:52.695387  268639 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-101592" context rescaled to 1 replicas
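Rescaling coredns to one replica keeps the single-node cluster light. The standalone equivalent of what kapi.go just did (illustrative):

	kubectl -n kube-system scale deployment coredns --replicas=1
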
	I1027 18:57:52.842398  268639 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1027 18:57:52.842418  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1027 18:57:52.974014  268639 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1027 18:57:52.974091  268639 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1027 18:57:53.151597  268639 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1027 18:57:53.151668  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1027 18:57:53.372795  268639 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1027 18:57:53.372818  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1027 18:57:53.588666  268639 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1027 18:57:53.588690  268639 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1027 18:57:53.845143  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1027 18:57:54.179279  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:57:56.108706  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.952713347s)
	I1027 18:57:56.108737  268639 addons.go:479] Verifying addon ingress=true in "addons-101592"
	I1027 18:57:56.109002  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.865887294s)
	I1027 18:57:56.109223  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.847550263s)
	W1027 18:57:56.109262  268639 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:56.109302  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.668500542s)
	I1027 18:57:56.109303  268639 retry.go:31] will retry after 155.902552ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
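Both ig-crd.yaml failures complain that apiVersion and kind are unset, which lines up with the 14-byte transfer of inspektor-gadget/ig-crd.yaml logged at 18:57:50.190: the file on the node is essentially empty, so retrying the apply cannot succeed until the manifest itself is rewritten. Every Kubernetes object, CRDs included, must at minimum carry this header (name shown as a placeholder, since the real content never made it across):

	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: <plural>.<group>   # placeholder; the actual name lives in the missing manifest
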
	I1027 18:57:56.109331  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.659134218s)
	I1027 18:57:56.109279  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.725876533s)
	I1027 18:57:56.109454  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.591429513s)
	I1027 18:57:56.109507  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.55865899s)
	I1027 18:57:56.109582  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.237163226s)
	I1027 18:57:56.109815  268639 addons.go:479] Verifying addon metrics-server=true in "addons-101592"
	I1027 18:57:56.109636  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.224143468s)
	I1027 18:57:56.109659  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.062789496s)
	I1027 18:57:56.110031  268639 addons.go:479] Verifying addon registry=true in "addons-101592"
	I1027 18:57:56.109728  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.473693978s)
	W1027 18:57:56.111157  268639 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1027 18:57:56.111176  268639 retry.go:31] will retry after 199.373259ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
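Unlike the ig-crd case, this failure is an ordering race rather than a bad manifest: one apply batch creates the snapshot CRDs and a VolumeSnapshotClass object, and the class is rejected because its CRD is not yet established ("ensure CRDs are installed first"). The retry 199ms later, and the forced re-apply at 18:57:56.310, give the API server time to establish the CRDs; when that re-apply completes at 18:57:59.200 below, no further failure is logged for this batch. One way to serialize the steps by hand (illustrative kubectl sequence, not minikube's code):

	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f csi-hostpath-snapshotclass.yaml
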
	I1027 18:57:56.111964  268639 out.go:179] * Verifying ingress addon...
	I1027 18:57:56.114196  268639 out.go:179] * Verifying registry addon...
	I1027 18:57:56.117323  268639 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1027 18:57:56.117532  268639 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-101592 service yakd-dashboard -n yakd-dashboard
	
	I1027 18:57:56.119411  268639 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1027 18:57:56.124123  268639 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1027 18:57:56.124149  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1027 18:57:56.126227  268639 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
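The "object has been modified" error is Kubernetes optimistic concurrency at work: marking local-path as the default class was attempted as a read-modify-write update, and another writer bumped the StorageClass's resourceVersion in between. A merge patch of just the default-class annotation carries no resourceVersion and sidesteps the conflict (illustrative command; the annotation key is the standard one):

	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
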
	I1027 18:57:56.126614  268639 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1027 18:57:56.126631  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:56.266188  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:56.310829  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1027 18:57:56.493224  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.648020301s)
	I1027 18:57:56.493263  268639 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-101592"
	I1027 18:57:56.496053  268639 out.go:179] * Verifying csi-hostpath-driver addon...
	I1027 18:57:56.499368  268639 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1027 18:57:56.534750  268639 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1027 18:57:56.534776  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
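The kapi.go "waiting for pod" lines that dominate the rest of this section are label-selector polls, repeated roughly every 500ms per addon until the pods leave Pending. A one-shot hand-run equivalent for this selector (illustrative):

	kubectl -n kube-system wait --for=condition=Ready --timeout=6m \
	  pod -l kubernetes.io/minikube-addons=csi-hostpath-driver
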
	I1027 18:57:56.621543  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:56.624697  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:57:56.664926  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:57:57.002783  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:57.123730  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:57.124136  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:57.329650  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.063350377s)
	W1027 18:57:57.329704  268639 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:57.329743  268639 retry.go:31] will retry after 341.860927ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:57.502772  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:57.620805  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:57.622675  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:57.672677  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:57.900399  268639 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1027 18:57:57.900542  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:57.926161  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:58.003511  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:58.056790  268639 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1027 18:57:58.073991  268639 addons.go:238] Setting addon gcp-auth=true in "addons-101592"
	I1027 18:57:58.074048  268639 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:57:58.074544  268639 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:57:58.098740  268639 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1027 18:57:58.098815  268639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:57:58.121976  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:58.126933  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:58.132688  268639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:57:58.502889  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:58.621391  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:58.623634  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:57:58.665530  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:57:59.003712  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:59.123120  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:59.125318  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:59.200402  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.88947276s)
	I1027 18:57:59.200501  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.527790412s)
	W1027 18:57:59.200535  268639 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:59.200560  268639 retry.go:31] will retry after 568.935121ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:59.200599  268639 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.101834853s)
	I1027 18:57:59.203683  268639 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 18:57:59.206525  268639 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1027 18:57:59.209442  268639 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1027 18:57:59.209469  268639 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1027 18:57:59.223910  268639 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1027 18:57:59.223940  268639 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1027 18:57:59.237159  268639 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1027 18:57:59.237179  268639 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1027 18:57:59.249272  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1027 18:57:59.502307  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:59.625480  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:59.626352  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:59.723968  268639 addons.go:479] Verifying addon gcp-auth=true in "addons-101592"
	I1027 18:57:59.729102  268639 out.go:179] * Verifying gcp-auth addon...
	I1027 18:57:59.733641  268639 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1027 18:57:59.743566  268639 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1027 18:57:59.743590  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:59.769979  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:58:00.002978  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:00.138145  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:00.138582  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:00.246619  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:00.504307  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:00.621295  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:00.635834  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:00.736803  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:00.848000  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.077978344s)
	W1027 18:58:00.848037  268639 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:00.848083  268639 retry.go:31] will retry after 1.036199785s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
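Every attempt fails the same way: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because at least one YAML document in it sets neither apiVersion nor kind, the two fields every manifest must carry. A minimal sketch of a well-formed document, using a placeholder ConfigMap rather than the actual Inspektor Gadget CRD (whose contents are not shown in the log):

    # Placeholder object; the point is the mandatory apiVersion/kind header.
    cat <<'EOF' | kubectl apply --dry-run=client -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: example
    EOF

As the error text itself notes, --validate=false would skip the check, but the manifest would still be malformed; fixing the file is the real remedy.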
	I1027 18:58:01.003213  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:01.120886  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:01.123264  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:58:01.165604  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:58:01.236717  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:01.503030  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:01.621833  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:01.623034  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:01.737369  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:01.884511  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:58:02.003082  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:02.122460  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:02.123316  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:02.237544  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:02.502973  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:02.628206  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:02.628751  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:58:02.716973  268639 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:02.717005  268639 retry.go:31] will retry after 703.670259ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:02.736829  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:03.002927  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:03.121192  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:03.122936  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:58:03.166197  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:58:03.237580  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:03.421825  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:58:03.503072  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:03.622054  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:03.622297  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:03.737650  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:04.002104  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:04.120737  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:04.122472  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:58:04.229983  268639 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:04.230014  268639 retry.go:31] will retry after 2.538197086s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:04.236611  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:04.502921  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:04.620985  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:04.622680  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:04.737220  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:05.003361  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:05.122515  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:05.122642  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:05.237040  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:05.503251  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:05.621242  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:05.622115  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:58:05.665629  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:58:05.737205  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:06.002228  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:06.122018  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:06.122323  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:06.241014  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:06.502688  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:06.621179  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:06.622509  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:06.737091  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:06.769211  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:58:07.002423  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:07.121187  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:07.123426  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:07.237075  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:07.503217  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1027 18:58:07.596508  268639 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:07.596541  268639 retry.go:31] will retry after 3.031116829s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:07.620193  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:07.621992  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:07.736398  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:08.002770  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:08.121649  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:08.123804  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:58:08.164641  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:58:08.237472  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:08.502810  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:08.620556  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:08.623053  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:08.736743  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:09.002657  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:09.120611  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:09.122276  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:09.236568  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:09.502567  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:09.620785  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:09.622539  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:09.737255  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:10.003211  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:10.123495  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:10.124742  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:58:10.164728  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:58:10.236542  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:10.502671  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:10.620715  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:10.622290  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:10.628398  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:58:10.737840  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:11.003290  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:11.121776  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:11.126232  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:11.240634  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:58:11.447584  268639 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:11.447615  268639 retry.go:31] will retry after 5.012638589s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:11.502238  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:11.620080  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:11.621708  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:11.737027  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:12.004209  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:12.120990  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:12.123373  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:58:12.165076  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:58:12.237273  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:12.503049  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:12.621261  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:12.622661  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:12.737654  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:13.002421  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:13.120314  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:13.123688  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:13.236426  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:13.502485  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:13.621452  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:13.622278  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:13.737073  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:14.002235  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:14.121651  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:14.121818  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:14.236527  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:14.502413  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:14.620483  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:14.622439  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:58:14.665055  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:58:14.736857  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:15.002944  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:15.122613  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:15.124155  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:15.236383  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:15.502437  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:15.622244  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:15.622610  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:15.737077  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:16.003025  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:16.121194  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:16.122134  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:16.237573  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:16.461043  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:58:16.504105  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:16.623195  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:16.624247  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1027 18:58:16.665629  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:58:16.736869  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:17.003116  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:17.122050  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:17.123397  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:17.237481  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:58:17.278221  268639 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:17.278257  268639 retry.go:31] will retry after 9.258063329s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:17.502304  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:17.620190  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:17.622224  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:17.737119  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:18.003032  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:18.120877  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:18.122807  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:18.237310  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:18.502438  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:18.620642  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:18.622737  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:18.737384  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:19.003004  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:19.120896  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:19.122698  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:58:19.164560  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:58:19.237536  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:19.502913  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:19.620750  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:19.622300  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:19.736958  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:20.003290  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:20.121163  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:20.122774  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:20.237453  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:20.502468  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:20.620269  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:20.622062  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:20.736697  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:21.002775  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:21.121114  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:21.123183  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:58:21.165318  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:58:21.236485  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:21.502700  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:21.621235  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:21.622270  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:21.736704  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:22.002387  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:22.120355  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:22.122566  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:22.236834  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:22.502597  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:22.620639  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:22.622731  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:22.736609  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:23.002888  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:23.120785  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:23.122609  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:23.237516  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:23.502471  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:23.621505  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:23.623547  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:58:23.665626  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:58:23.737433  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:24.002510  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:24.120482  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:24.122583  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:24.237570  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:24.502560  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:24.620728  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:24.622554  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:24.737299  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:25.002205  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:25.121096  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:25.121940  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:25.236306  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:25.502055  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:25.621145  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:25.621987  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:25.738148  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:26.002916  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:26.120821  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:26.122758  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:58:26.164484  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:58:26.241763  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:26.503079  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:26.537138  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:58:26.620539  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:26.622329  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:26.736945  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:27.002600  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:27.122595  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:27.122869  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:27.238476  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:58:27.369787  268639 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:27.369822  268639 retry.go:31] will retry after 9.860564125s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
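Note how the retry delays grow from roughly 0.7s to just under 10s across attempts: minikube's retry.go backs off between applies rather than hammering the apiserver. A hypothetical shell rendering of the same retry-with-backoff pattern (delays illustrative; the command is copied from the log):

    for delay in 1 2 4 8; do
      sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.34.1/kubectl apply --force \
        -f /etc/kubernetes/addons/ig-crd.yaml \
        -f /etc/kubernetes/addons/ig-deployment.yaml && break
      sleep "$delay"
    done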
	I1027 18:58:27.502798  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:27.621763  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:27.621830  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:27.736645  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:28.002350  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:28.120828  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:28.122852  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:58:28.164551  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:58:28.237777  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:28.502839  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:28.621479  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:28.622824  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:28.737210  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:29.002258  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:29.120247  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:29.122397  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:29.236899  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:29.502735  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:29.621303  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:29.622494  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:29.737120  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:30.002922  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:30.121865  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:30.123748  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:58:30.165797  268639 node_ready.go:57] node "addons-101592" has "Ready":"False" status (will retry)
	I1027 18:58:30.236863  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:30.503002  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:30.621048  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:30.623267  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:30.736881  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:31.003580  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:31.122545  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:31.123435  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:31.237282  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:31.502367  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:31.620328  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:31.622354  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:31.737108  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:32.017695  268639 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1027 18:58:32.017726  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:32.271154  268639 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1027 18:58:32.271220  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:32.272041  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:32.272537  268639 node_ready.go:49] node "addons-101592" is "Ready"
	I1027 18:58:32.272590  268639 node_ready.go:38] duration metric: took 40.110888336s for node "addons-101592" to be "Ready" ...
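Node readiness is what unblocks the Pending pods polled above. A manual check equivalent to the node_ready poll, assuming kubectl access (node name from the log; timeout illustrative):

    kubectl wait --for=condition=Ready node/addons-101592 --timeout=120s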
	I1027 18:58:32.272619  268639 api_server.go:52] waiting for apiserver process to appear ...
	I1027 18:58:32.272707  268639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 18:58:32.292157  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:32.304997  268639 api_server.go:72] duration metric: took 42.453227168s to wait for apiserver process to appear ...
	I1027 18:58:32.305066  268639 api_server.go:88] waiting for apiserver healthz status ...
	I1027 18:58:32.305103  268639 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1027 18:58:32.353369  268639 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
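The healthz probe is a plain HTTPS GET; /healthz is readable anonymously on a default kubeadm-style apiserver. A manual equivalent (endpoint taken from the log; -k skips verification of the cluster-internal CA):

    curl -k https://192.168.49.2:8443/healthz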
	I1027 18:58:32.354965  268639 api_server.go:141] control plane version: v1.34.1
	I1027 18:58:32.355051  268639 api_server.go:131] duration metric: took 49.96197ms to wait for apiserver health ...
	I1027 18:58:32.355074  268639 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 18:58:32.371510  268639 system_pods.go:59] 19 kube-system pods found
	I1027 18:58:32.371599  268639 system_pods.go:61] "coredns-66bc5c9577-kbgn5" [1991cfb9-3c4d-4617-bbae-e9323fb13c40] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 18:58:32.371623  268639 system_pods.go:61] "csi-hostpath-attacher-0" [51a4bf95-11c1-4dfb-a24d-3b612848e249] Pending
	I1027 18:58:32.371663  268639 system_pods.go:61] "csi-hostpath-resizer-0" [030a7823-ab0b-438e-a310-b5500db7434c] Pending
	I1027 18:58:32.371691  268639 system_pods.go:61] "csi-hostpathplugin-42bzh" [dfa2ad2c-a861-4699-851d-80b10c2a11b4] Pending
	I1027 18:58:32.371716  268639 system_pods.go:61] "etcd-addons-101592" [22ba28c5-70eb-44bb-bc1c-99fdfacd3b07] Running
	I1027 18:58:32.371742  268639 system_pods.go:61] "kindnet-87t7g" [2787c598-3064-4a7c-89a0-28d581264bc7] Running
	I1027 18:58:32.371774  268639 system_pods.go:61] "kube-apiserver-addons-101592" [24a55c4e-9f3b-416d-bfed-a9a74b7b1210] Running
	I1027 18:58:32.371801  268639 system_pods.go:61] "kube-controller-manager-addons-101592" [bd4d474e-49ea-4c7d-97a1-71dfcdae46c8] Running
	I1027 18:58:32.371827  268639 system_pods.go:61] "kube-ingress-dns-minikube" [5be647b4-394d-4537-a6b1-82da5122c552] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 18:58:32.371850  268639 system_pods.go:61] "kube-proxy-k9g92" [e9916ac3-9031-44fd-a0ed-d6e5647234b6] Running
	I1027 18:58:32.371887  268639 system_pods.go:61] "kube-scheduler-addons-101592" [cff17c74-74e7-40a5-9cf6-91c4fb0684fd] Running
	I1027 18:58:32.371916  268639 system_pods.go:61] "metrics-server-85b7d694d7-mmqw2" [38c42c1b-338f-4857-a519-d3a5ae92f76f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 18:58:32.371939  268639 system_pods.go:61] "nvidia-device-plugin-daemonset-sghjb" [4b33da98-82a0-4a8e-aa0c-c6cf4878e558] Pending
	I1027 18:58:32.371960  268639 system_pods.go:61] "registry-6b586f9694-jvgtv" [590c8e4c-843c-4795-a036-d9969aaa1662] Pending
	I1027 18:58:32.371995  268639 system_pods.go:61] "registry-creds-764b6fb674-fz96k" [aca5b0e8-8150-49b6-a3e8-536f93b6d0fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 18:58:32.372022  268639 system_pods.go:61] "registry-proxy-k87sb" [459056d8-2856-482a-8f3a-86430bc52e0e] Pending
	I1027 18:58:32.372051  268639 system_pods.go:61] "snapshot-controller-7d9fbc56b8-pqsgt" [c0a49fc2-a5cb-4b90-8e22-c6bfba73b976] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:58:32.372078  268639 system_pods.go:61] "snapshot-controller-7d9fbc56b8-pvz49" [abc88a48-3d78-49ca-9bbc-a47d33c52228] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:58:32.372113  268639 system_pods.go:61] "storage-provisioner" [dcd0674c-1f9d-42a7-acf7-06c97b686cf1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 18:58:32.372146  268639 system_pods.go:74] duration metric: took 17.048464ms to wait for pod list to return data ...
	I1027 18:58:32.372171  268639 default_sa.go:34] waiting for default service account to be created ...
	I1027 18:58:32.380679  268639 default_sa.go:45] found service account: "default"
	I1027 18:58:32.380758  268639 default_sa.go:55] duration metric: took 8.564235ms for default service account to be created ...
	I1027 18:58:32.380785  268639 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 18:58:32.404004  268639 system_pods.go:86] 19 kube-system pods found
	I1027 18:58:32.404093  268639 system_pods.go:89] "coredns-66bc5c9577-kbgn5" [1991cfb9-3c4d-4617-bbae-e9323fb13c40] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 18:58:32.404120  268639 system_pods.go:89] "csi-hostpath-attacher-0" [51a4bf95-11c1-4dfb-a24d-3b612848e249] Pending
	I1027 18:58:32.404157  268639 system_pods.go:89] "csi-hostpath-resizer-0" [030a7823-ab0b-438e-a310-b5500db7434c] Pending
	I1027 18:58:32.404183  268639 system_pods.go:89] "csi-hostpathplugin-42bzh" [dfa2ad2c-a861-4699-851d-80b10c2a11b4] Pending
	I1027 18:58:32.404208  268639 system_pods.go:89] "etcd-addons-101592" [22ba28c5-70eb-44bb-bc1c-99fdfacd3b07] Running
	I1027 18:58:32.404235  268639 system_pods.go:89] "kindnet-87t7g" [2787c598-3064-4a7c-89a0-28d581264bc7] Running
	I1027 18:58:32.404269  268639 system_pods.go:89] "kube-apiserver-addons-101592" [24a55c4e-9f3b-416d-bfed-a9a74b7b1210] Running
	I1027 18:58:32.404300  268639 system_pods.go:89] "kube-controller-manager-addons-101592" [bd4d474e-49ea-4c7d-97a1-71dfcdae46c8] Running
	I1027 18:58:32.404325  268639 system_pods.go:89] "kube-ingress-dns-minikube" [5be647b4-394d-4537-a6b1-82da5122c552] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 18:58:32.404387  268639 system_pods.go:89] "kube-proxy-k9g92" [e9916ac3-9031-44fd-a0ed-d6e5647234b6] Running
	I1027 18:58:32.404417  268639 system_pods.go:89] "kube-scheduler-addons-101592" [cff17c74-74e7-40a5-9cf6-91c4fb0684fd] Running
	I1027 18:58:32.404441  268639 system_pods.go:89] "metrics-server-85b7d694d7-mmqw2" [38c42c1b-338f-4857-a519-d3a5ae92f76f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 18:58:32.404463  268639 system_pods.go:89] "nvidia-device-plugin-daemonset-sghjb" [4b33da98-82a0-4a8e-aa0c-c6cf4878e558] Pending
	I1027 18:58:32.404500  268639 system_pods.go:89] "registry-6b586f9694-jvgtv" [590c8e4c-843c-4795-a036-d9969aaa1662] Pending
	I1027 18:58:32.404526  268639 system_pods.go:89] "registry-creds-764b6fb674-fz96k" [aca5b0e8-8150-49b6-a3e8-536f93b6d0fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 18:58:32.404548  268639 system_pods.go:89] "registry-proxy-k87sb" [459056d8-2856-482a-8f3a-86430bc52e0e] Pending
	I1027 18:58:32.404572  268639 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pqsgt" [c0a49fc2-a5cb-4b90-8e22-c6bfba73b976] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:58:32.404613  268639 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pvz49" [abc88a48-3d78-49ca-9bbc-a47d33c52228] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:58:32.404642  268639 system_pods.go:89] "storage-provisioner" [dcd0674c-1f9d-42a7-acf7-06c97b686cf1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 18:58:32.404675  268639 retry.go:31] will retry after 275.776227ms: missing components: kube-dns
	I1027 18:58:32.518879  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:32.648339  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:32.648525  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
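Each "waiting for pod ... current state: Pending" line above is one iteration of a label-selector poll against the kube-system namespace, and the interleaved system_pods retries walk the full pod list the same way. A minimal sketch of that poll with client-go follows; waitForAddonPod is a hypothetical name, and the real helper lives in minikube's kapi package.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForAddonPod polls pods in kube-system matching an addon's label
// selector until one reports Running, in the spirit of the kapi.go:96
// lines above. Hypothetical name; not minikube's actual function.
func waitForAddonPod(client kubernetes.Interface, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for pods matching %q", selector)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForAddonPod(client, "kubernetes.io/minikube-addons=registry", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}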
	I1027 18:58:32.689679  268639 system_pods.go:86] 19 kube-system pods found
	I1027 18:58:32.689770  268639 system_pods.go:89] "coredns-66bc5c9577-kbgn5" [1991cfb9-3c4d-4617-bbae-e9323fb13c40] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 18:58:32.689796  268639 system_pods.go:89] "csi-hostpath-attacher-0" [51a4bf95-11c1-4dfb-a24d-3b612848e249] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 18:58:32.689839  268639 system_pods.go:89] "csi-hostpath-resizer-0" [030a7823-ab0b-438e-a310-b5500db7434c] Pending
	I1027 18:58:32.689870  268639 system_pods.go:89] "csi-hostpathplugin-42bzh" [dfa2ad2c-a861-4699-851d-80b10c2a11b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 18:58:32.689892  268639 system_pods.go:89] "etcd-addons-101592" [22ba28c5-70eb-44bb-bc1c-99fdfacd3b07] Running
	I1027 18:58:32.689916  268639 system_pods.go:89] "kindnet-87t7g" [2787c598-3064-4a7c-89a0-28d581264bc7] Running
	I1027 18:58:32.689950  268639 system_pods.go:89] "kube-apiserver-addons-101592" [24a55c4e-9f3b-416d-bfed-a9a74b7b1210] Running
	I1027 18:58:32.689979  268639 system_pods.go:89] "kube-controller-manager-addons-101592" [bd4d474e-49ea-4c7d-97a1-71dfcdae46c8] Running
	I1027 18:58:32.690005  268639 system_pods.go:89] "kube-ingress-dns-minikube" [5be647b4-394d-4537-a6b1-82da5122c552] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 18:58:32.690026  268639 system_pods.go:89] "kube-proxy-k9g92" [e9916ac3-9031-44fd-a0ed-d6e5647234b6] Running
	I1027 18:58:32.690062  268639 system_pods.go:89] "kube-scheduler-addons-101592" [cff17c74-74e7-40a5-9cf6-91c4fb0684fd] Running
	I1027 18:58:32.690092  268639 system_pods.go:89] "metrics-server-85b7d694d7-mmqw2" [38c42c1b-338f-4857-a519-d3a5ae92f76f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 18:58:32.690121  268639 system_pods.go:89] "nvidia-device-plugin-daemonset-sghjb" [4b33da98-82a0-4a8e-aa0c-c6cf4878e558] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 18:58:32.690171  268639 system_pods.go:89] "registry-6b586f9694-jvgtv" [590c8e4c-843c-4795-a036-d9969aaa1662] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 18:58:32.690199  268639 system_pods.go:89] "registry-creds-764b6fb674-fz96k" [aca5b0e8-8150-49b6-a3e8-536f93b6d0fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 18:58:32.690222  268639 system_pods.go:89] "registry-proxy-k87sb" [459056d8-2856-482a-8f3a-86430bc52e0e] Pending
	I1027 18:58:32.690246  268639 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pqsgt" [c0a49fc2-a5cb-4b90-8e22-c6bfba73b976] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:58:32.690281  268639 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pvz49" [abc88a48-3d78-49ca-9bbc-a47d33c52228] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:58:32.690315  268639 system_pods.go:89] "storage-provisioner" [dcd0674c-1f9d-42a7-acf7-06c97b686cf1] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 18:58:32.690350  268639 retry.go:31] will retry after 352.092ms: missing components: kube-dns
	I1027 18:58:32.740054  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:33.002624  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:33.047909  268639 system_pods.go:86] 19 kube-system pods found
	I1027 18:58:33.048009  268639 system_pods.go:89] "coredns-66bc5c9577-kbgn5" [1991cfb9-3c4d-4617-bbae-e9323fb13c40] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 18:58:33.048027  268639 system_pods.go:89] "csi-hostpath-attacher-0" [51a4bf95-11c1-4dfb-a24d-3b612848e249] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 18:58:33.048036  268639 system_pods.go:89] "csi-hostpath-resizer-0" [030a7823-ab0b-438e-a310-b5500db7434c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1027 18:58:33.048069  268639 system_pods.go:89] "csi-hostpathplugin-42bzh" [dfa2ad2c-a861-4699-851d-80b10c2a11b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 18:58:33.048083  268639 system_pods.go:89] "etcd-addons-101592" [22ba28c5-70eb-44bb-bc1c-99fdfacd3b07] Running
	I1027 18:58:33.048091  268639 system_pods.go:89] "kindnet-87t7g" [2787c598-3064-4a7c-89a0-28d581264bc7] Running
	I1027 18:58:33.048096  268639 system_pods.go:89] "kube-apiserver-addons-101592" [24a55c4e-9f3b-416d-bfed-a9a74b7b1210] Running
	I1027 18:58:33.048107  268639 system_pods.go:89] "kube-controller-manager-addons-101592" [bd4d474e-49ea-4c7d-97a1-71dfcdae46c8] Running
	I1027 18:58:33.048114  268639 system_pods.go:89] "kube-ingress-dns-minikube" [5be647b4-394d-4537-a6b1-82da5122c552] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 18:58:33.048119  268639 system_pods.go:89] "kube-proxy-k9g92" [e9916ac3-9031-44fd-a0ed-d6e5647234b6] Running
	I1027 18:58:33.048143  268639 system_pods.go:89] "kube-scheduler-addons-101592" [cff17c74-74e7-40a5-9cf6-91c4fb0684fd] Running
	I1027 18:58:33.048161  268639 system_pods.go:89] "metrics-server-85b7d694d7-mmqw2" [38c42c1b-338f-4857-a519-d3a5ae92f76f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 18:58:33.048168  268639 system_pods.go:89] "nvidia-device-plugin-daemonset-sghjb" [4b33da98-82a0-4a8e-aa0c-c6cf4878e558] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 18:58:33.048181  268639 system_pods.go:89] "registry-6b586f9694-jvgtv" [590c8e4c-843c-4795-a036-d9969aaa1662] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 18:58:33.048189  268639 system_pods.go:89] "registry-creds-764b6fb674-fz96k" [aca5b0e8-8150-49b6-a3e8-536f93b6d0fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 18:58:33.048197  268639 system_pods.go:89] "registry-proxy-k87sb" [459056d8-2856-482a-8f3a-86430bc52e0e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1027 18:58:33.048203  268639 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pqsgt" [c0a49fc2-a5cb-4b90-8e22-c6bfba73b976] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:58:33.048227  268639 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pvz49" [abc88a48-3d78-49ca-9bbc-a47d33c52228] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:58:33.048241  268639 system_pods.go:89] "storage-provisioner" [dcd0674c-1f9d-42a7-acf7-06c97b686cf1] Running
	I1027 18:58:33.048268  268639 retry.go:31] will retry after 469.239154ms: missing components: kube-dns
	I1027 18:58:33.123157  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:33.123329  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:33.238704  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:33.503304  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:33.522377  268639 system_pods.go:86] 19 kube-system pods found
	I1027 18:58:33.522413  268639 system_pods.go:89] "coredns-66bc5c9577-kbgn5" [1991cfb9-3c4d-4617-bbae-e9323fb13c40] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 18:58:33.522424  268639 system_pods.go:89] "csi-hostpath-attacher-0" [51a4bf95-11c1-4dfb-a24d-3b612848e249] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 18:58:33.522431  268639 system_pods.go:89] "csi-hostpath-resizer-0" [030a7823-ab0b-438e-a310-b5500db7434c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1027 18:58:33.522437  268639 system_pods.go:89] "csi-hostpathplugin-42bzh" [dfa2ad2c-a861-4699-851d-80b10c2a11b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 18:58:33.522444  268639 system_pods.go:89] "etcd-addons-101592" [22ba28c5-70eb-44bb-bc1c-99fdfacd3b07] Running
	I1027 18:58:33.522450  268639 system_pods.go:89] "kindnet-87t7g" [2787c598-3064-4a7c-89a0-28d581264bc7] Running
	I1027 18:58:33.522454  268639 system_pods.go:89] "kube-apiserver-addons-101592" [24a55c4e-9f3b-416d-bfed-a9a74b7b1210] Running
	I1027 18:58:33.522459  268639 system_pods.go:89] "kube-controller-manager-addons-101592" [bd4d474e-49ea-4c7d-97a1-71dfcdae46c8] Running
	I1027 18:58:33.522465  268639 system_pods.go:89] "kube-ingress-dns-minikube" [5be647b4-394d-4537-a6b1-82da5122c552] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 18:58:33.522478  268639 system_pods.go:89] "kube-proxy-k9g92" [e9916ac3-9031-44fd-a0ed-d6e5647234b6] Running
	I1027 18:58:33.522482  268639 system_pods.go:89] "kube-scheduler-addons-101592" [cff17c74-74e7-40a5-9cf6-91c4fb0684fd] Running
	I1027 18:58:33.522489  268639 system_pods.go:89] "metrics-server-85b7d694d7-mmqw2" [38c42c1b-338f-4857-a519-d3a5ae92f76f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 18:58:33.522504  268639 system_pods.go:89] "nvidia-device-plugin-daemonset-sghjb" [4b33da98-82a0-4a8e-aa0c-c6cf4878e558] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 18:58:33.522510  268639 system_pods.go:89] "registry-6b586f9694-jvgtv" [590c8e4c-843c-4795-a036-d9969aaa1662] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 18:58:33.522516  268639 system_pods.go:89] "registry-creds-764b6fb674-fz96k" [aca5b0e8-8150-49b6-a3e8-536f93b6d0fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 18:58:33.522528  268639 system_pods.go:89] "registry-proxy-k87sb" [459056d8-2856-482a-8f3a-86430bc52e0e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1027 18:58:33.522534  268639 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pqsgt" [c0a49fc2-a5cb-4b90-8e22-c6bfba73b976] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:58:33.522541  268639 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pvz49" [abc88a48-3d78-49ca-9bbc-a47d33c52228] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:58:33.522548  268639 system_pods.go:89] "storage-provisioner" [dcd0674c-1f9d-42a7-acf7-06c97b686cf1] Running
	I1027 18:58:33.522564  268639 retry.go:31] will retry after 449.494258ms: missing components: kube-dns
	I1027 18:58:33.621134  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:33.623101  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:33.737453  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:33.977517  268639 system_pods.go:86] 19 kube-system pods found
	I1027 18:58:33.977554  268639 system_pods.go:89] "coredns-66bc5c9577-kbgn5" [1991cfb9-3c4d-4617-bbae-e9323fb13c40] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 18:58:33.977563  268639 system_pods.go:89] "csi-hostpath-attacher-0" [51a4bf95-11c1-4dfb-a24d-3b612848e249] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 18:58:33.977570  268639 system_pods.go:89] "csi-hostpath-resizer-0" [030a7823-ab0b-438e-a310-b5500db7434c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1027 18:58:33.977576  268639 system_pods.go:89] "csi-hostpathplugin-42bzh" [dfa2ad2c-a861-4699-851d-80b10c2a11b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 18:58:33.977581  268639 system_pods.go:89] "etcd-addons-101592" [22ba28c5-70eb-44bb-bc1c-99fdfacd3b07] Running
	I1027 18:58:33.977586  268639 system_pods.go:89] "kindnet-87t7g" [2787c598-3064-4a7c-89a0-28d581264bc7] Running
	I1027 18:58:33.977590  268639 system_pods.go:89] "kube-apiserver-addons-101592" [24a55c4e-9f3b-416d-bfed-a9a74b7b1210] Running
	I1027 18:58:33.977595  268639 system_pods.go:89] "kube-controller-manager-addons-101592" [bd4d474e-49ea-4c7d-97a1-71dfcdae46c8] Running
	I1027 18:58:33.977607  268639 system_pods.go:89] "kube-ingress-dns-minikube" [5be647b4-394d-4537-a6b1-82da5122c552] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 18:58:33.977616  268639 system_pods.go:89] "kube-proxy-k9g92" [e9916ac3-9031-44fd-a0ed-d6e5647234b6] Running
	I1027 18:58:33.977621  268639 system_pods.go:89] "kube-scheduler-addons-101592" [cff17c74-74e7-40a5-9cf6-91c4fb0684fd] Running
	I1027 18:58:33.977629  268639 system_pods.go:89] "metrics-server-85b7d694d7-mmqw2" [38c42c1b-338f-4857-a519-d3a5ae92f76f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 18:58:33.977646  268639 system_pods.go:89] "nvidia-device-plugin-daemonset-sghjb" [4b33da98-82a0-4a8e-aa0c-c6cf4878e558] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 18:58:33.977660  268639 system_pods.go:89] "registry-6b586f9694-jvgtv" [590c8e4c-843c-4795-a036-d9969aaa1662] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 18:58:33.977666  268639 system_pods.go:89] "registry-creds-764b6fb674-fz96k" [aca5b0e8-8150-49b6-a3e8-536f93b6d0fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 18:58:33.977673  268639 system_pods.go:89] "registry-proxy-k87sb" [459056d8-2856-482a-8f3a-86430bc52e0e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1027 18:58:33.977682  268639 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pqsgt" [c0a49fc2-a5cb-4b90-8e22-c6bfba73b976] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:58:33.977688  268639 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pvz49" [abc88a48-3d78-49ca-9bbc-a47d33c52228] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:58:33.977696  268639 system_pods.go:89] "storage-provisioner" [dcd0674c-1f9d-42a7-acf7-06c97b686cf1] Running
	I1027 18:58:33.977710  268639 retry.go:31] will retry after 516.588235ms: missing components: kube-dns
	I1027 18:58:34.002832  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:34.123184  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:34.123529  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:34.238121  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:34.500420  268639 system_pods.go:86] 19 kube-system pods found
	I1027 18:58:34.500514  268639 system_pods.go:89] "coredns-66bc5c9577-kbgn5" [1991cfb9-3c4d-4617-bbae-e9323fb13c40] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 18:58:34.500540  268639 system_pods.go:89] "csi-hostpath-attacher-0" [51a4bf95-11c1-4dfb-a24d-3b612848e249] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 18:58:34.500591  268639 system_pods.go:89] "csi-hostpath-resizer-0" [030a7823-ab0b-438e-a310-b5500db7434c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1027 18:58:34.500627  268639 system_pods.go:89] "csi-hostpathplugin-42bzh" [dfa2ad2c-a861-4699-851d-80b10c2a11b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 18:58:34.500665  268639 system_pods.go:89] "etcd-addons-101592" [22ba28c5-70eb-44bb-bc1c-99fdfacd3b07] Running
	I1027 18:58:34.500696  268639 system_pods.go:89] "kindnet-87t7g" [2787c598-3064-4a7c-89a0-28d581264bc7] Running
	I1027 18:58:34.500719  268639 system_pods.go:89] "kube-apiserver-addons-101592" [24a55c4e-9f3b-416d-bfed-a9a74b7b1210] Running
	I1027 18:58:34.500757  268639 system_pods.go:89] "kube-controller-manager-addons-101592" [bd4d474e-49ea-4c7d-97a1-71dfcdae46c8] Running
	I1027 18:58:34.500785  268639 system_pods.go:89] "kube-ingress-dns-minikube" [5be647b4-394d-4537-a6b1-82da5122c552] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 18:58:34.500806  268639 system_pods.go:89] "kube-proxy-k9g92" [e9916ac3-9031-44fd-a0ed-d6e5647234b6] Running
	I1027 18:58:34.500846  268639 system_pods.go:89] "kube-scheduler-addons-101592" [cff17c74-74e7-40a5-9cf6-91c4fb0684fd] Running
	I1027 18:58:34.500873  268639 system_pods.go:89] "metrics-server-85b7d694d7-mmqw2" [38c42c1b-338f-4857-a519-d3a5ae92f76f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 18:58:34.500903  268639 system_pods.go:89] "nvidia-device-plugin-daemonset-sghjb" [4b33da98-82a0-4a8e-aa0c-c6cf4878e558] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 18:58:34.500936  268639 system_pods.go:89] "registry-6b586f9694-jvgtv" [590c8e4c-843c-4795-a036-d9969aaa1662] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 18:58:34.500964  268639 system_pods.go:89] "registry-creds-764b6fb674-fz96k" [aca5b0e8-8150-49b6-a3e8-536f93b6d0fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 18:58:34.500986  268639 system_pods.go:89] "registry-proxy-k87sb" [459056d8-2856-482a-8f3a-86430bc52e0e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1027 18:58:34.501024  268639 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pqsgt" [c0a49fc2-a5cb-4b90-8e22-c6bfba73b976] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:58:34.501051  268639 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pvz49" [abc88a48-3d78-49ca-9bbc-a47d33c52228] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:58:34.501071  268639 system_pods.go:89] "storage-provisioner" [dcd0674c-1f9d-42a7-acf7-06c97b686cf1] Running
	I1027 18:58:34.501119  268639 retry.go:31] will retry after 573.759707ms: missing components: kube-dns
	I1027 18:58:34.505628  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:34.625365  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:34.647287  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:34.740252  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:35.006570  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:35.084254  268639 system_pods.go:86] 19 kube-system pods found
	I1027 18:58:35.084345  268639 system_pods.go:89] "coredns-66bc5c9577-kbgn5" [1991cfb9-3c4d-4617-bbae-e9323fb13c40] Running
	I1027 18:58:35.084379  268639 system_pods.go:89] "csi-hostpath-attacher-0" [51a4bf95-11c1-4dfb-a24d-3b612848e249] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 18:58:35.084433  268639 system_pods.go:89] "csi-hostpath-resizer-0" [030a7823-ab0b-438e-a310-b5500db7434c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1027 18:58:35.084473  268639 system_pods.go:89] "csi-hostpathplugin-42bzh" [dfa2ad2c-a861-4699-851d-80b10c2a11b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 18:58:35.084494  268639 system_pods.go:89] "etcd-addons-101592" [22ba28c5-70eb-44bb-bc1c-99fdfacd3b07] Running
	I1027 18:58:35.084516  268639 system_pods.go:89] "kindnet-87t7g" [2787c598-3064-4a7c-89a0-28d581264bc7] Running
	I1027 18:58:35.084548  268639 system_pods.go:89] "kube-apiserver-addons-101592" [24a55c4e-9f3b-416d-bfed-a9a74b7b1210] Running
	I1027 18:58:35.084572  268639 system_pods.go:89] "kube-controller-manager-addons-101592" [bd4d474e-49ea-4c7d-97a1-71dfcdae46c8] Running
	I1027 18:58:35.084605  268639 system_pods.go:89] "kube-ingress-dns-minikube" [5be647b4-394d-4537-a6b1-82da5122c552] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 18:58:35.084632  268639 system_pods.go:89] "kube-proxy-k9g92" [e9916ac3-9031-44fd-a0ed-d6e5647234b6] Running
	I1027 18:58:35.084661  268639 system_pods.go:89] "kube-scheduler-addons-101592" [cff17c74-74e7-40a5-9cf6-91c4fb0684fd] Running
	I1027 18:58:35.084689  268639 system_pods.go:89] "metrics-server-85b7d694d7-mmqw2" [38c42c1b-338f-4857-a519-d3a5ae92f76f] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 18:58:35.084714  268639 system_pods.go:89] "nvidia-device-plugin-daemonset-sghjb" [4b33da98-82a0-4a8e-aa0c-c6cf4878e558] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 18:58:35.084738  268639 system_pods.go:89] "registry-6b586f9694-jvgtv" [590c8e4c-843c-4795-a036-d9969aaa1662] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 18:58:35.084773  268639 system_pods.go:89] "registry-creds-764b6fb674-fz96k" [aca5b0e8-8150-49b6-a3e8-536f93b6d0fe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 18:58:35.084799  268639 system_pods.go:89] "registry-proxy-k87sb" [459056d8-2856-482a-8f3a-86430bc52e0e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1027 18:58:35.084823  268639 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pqsgt" [c0a49fc2-a5cb-4b90-8e22-c6bfba73b976] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:58:35.084880  268639 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pvz49" [abc88a48-3d78-49ca-9bbc-a47d33c52228] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:58:35.084909  268639 system_pods.go:89] "storage-provisioner" [dcd0674c-1f9d-42a7-acf7-06c97b686cf1] Running
	I1027 18:58:35.084938  268639 system_pods.go:126] duration metric: took 2.704131163s to wait for k8s-apps to be running ...
	I1027 18:58:35.084961  268639 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 18:58:35.085073  268639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 18:58:35.104105  268639 system_svc.go:56] duration metric: took 19.135873ms WaitForService to wait for kubelet
	I1027 18:58:35.104188  268639 kubeadm.go:586] duration metric: took 45.252422244s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
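The kubelet check above is a plain systemd probe executed over SSH inside the node. Run locally, it reduces to the sketch below; the command line mirrors the log verbatim, and exit status 0 means the unit is active.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the probe in the log: sudo systemctl is-active --quiet service kubelet
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}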
	I1027 18:58:35.104242  268639 node_conditions.go:102] verifying NodePressure condition ...
	I1027 18:58:35.108316  268639 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 18:58:35.108396  268639 node_conditions.go:123] node cpu capacity is 2
	I1027 18:58:35.108424  268639 node_conditions.go:105] duration metric: took 4.16068ms to run NodePressure ...
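The NodePressure verification above reads the node's reported capacity and scans its condition list. A sketch assuming client-go, with the node name taken from this run's profile (addons-101592):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	node, err := client.CoreV1().Nodes().Get(context.TODO(), "addons-101592", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("cpu capacity:", node.Status.Capacity.Cpu().String())
	fmt.Println("ephemeral-storage:", node.Status.Capacity.StorageEphemeral().String())
	for _, c := range node.Status.Conditions {
		// On a healthy node MemoryPressure, DiskPressure and PIDPressure
		// are all False; only NodeReady should be True.
		if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
			fmt.Printf("pressure condition %s is True: %s\n", c.Type, c.Message)
		}
	}
}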
	I1027 18:58:35.108450  268639 start.go:241] waiting for startup goroutines ...
	I1027 18:58:35.121110  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:35.129418  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:35.237515  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:35.503236  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:35.622730  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:35.623002  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:35.739993  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:36.006327  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:36.123166  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:36.125423  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:36.244853  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:36.503443  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:36.622746  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:36.622806  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:36.737284  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:37.003859  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:37.122776  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:37.123983  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:37.231121  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:58:37.241340  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:37.503270  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:37.621177  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:37.626124  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:37.737635  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:38.003855  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:38.123088  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:38.123511  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:38.236766  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:38.317937  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.086776419s)
	W1027 18:58:38.317975  268639 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:38.317993  268639 retry.go:31] will retry after 9.298329959s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
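kubectl reports "apiVersion not set, kind not set" when some document inside the applied file is missing its TypeMeta header, most often an empty document left behind by a stray "---" separator; that is why the gadget daemonset still applies while validation of ig-crd.yaml fails. Below is a sketch that scans a multi-document manifest for such documents, assuming gopkg.in/yaml.v3; the file path is illustrative.

package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

// typeMeta holds the two fields kubectl's validator said were missing.
type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	data, err := os.ReadFile("ig-crd.yaml") // illustrative path
	if err != nil {
		panic(err)
	}
	// Naive document split; a robust tool would use a YAML stream decoder.
	for i, doc := range strings.Split(string(data), "\n---") {
		var tm typeMeta
		if err := yaml.Unmarshal([]byte(doc), &tm); err != nil {
			fmt.Printf("document %d: invalid YAML: %v\n", i, err)
			continue
		}
		if tm.APIVersion == "" || tm.Kind == "" {
			fmt.Printf("document %d: apiVersion/kind not set\n", i)
		}
	}
}

Running such a check before the apply would turn the retry loop that follows into a single, clearly attributed failure.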
	I1027 18:58:38.503167  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:38.620406  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:38.622229  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:38.737515  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:39.002648  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:39.121039  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:39.123512  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:39.238024  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:39.503347  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:39.620720  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:39.622522  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:39.736705  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:40.003612  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:40.125022  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:40.125385  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:40.238094  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:40.503299  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:40.621016  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:40.622634  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:40.736735  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:41.003277  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:41.123604  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:41.123818  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:41.236827  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:41.503630  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:41.622205  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:41.623922  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:41.737204  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:42.003014  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:42.130456  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:42.132282  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:42.242949  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:42.503682  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:42.620634  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:42.623627  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:42.736899  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:43.004229  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:43.121956  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:43.123075  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:43.237353  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:43.502499  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:43.620910  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:43.622975  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:43.737022  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:44.003612  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:44.121432  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:44.124104  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:44.237218  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:44.503319  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:44.621241  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:44.624174  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:44.737685  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:45.003950  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:45.126736  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:45.127218  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:45.243957  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:45.504907  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:45.622408  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:45.624111  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:45.737371  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:46.002917  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:46.121886  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:46.124465  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:46.261991  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:46.503723  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:46.622486  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:46.624234  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:46.737283  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:47.003384  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:47.124006  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:47.124140  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:47.237561  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:47.504007  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:47.617377  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:58:47.629430  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:47.629561  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:47.743057  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:48.003548  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:48.124261  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:48.124962  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:48.237208  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:48.506034  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:48.624833  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:48.625573  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:48.738521  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:49.012149  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:49.014283  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.396858118s)
	W1027 18:58:49.014325  268639 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:49.014356  268639 retry.go:31] will retry after 21.94994504s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
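The retry intervals grow between attempts (9.298s above, 21.949s here), consistent with a jittered, growing backoff. The exact policy is internal to minikube's retry package, so the base, factor, and jitter in this sketch are illustrative only.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry runs op up to attempts times, sleeping a jittered, exponentially
// growing delay between failures, in the spirit of the retry.go:31 lines
// above. Not minikube's actual parameters.
func retry(attempts int, base time.Duration, op func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2
	}
	return err
}

func main() {
	_ = retry(3, 5*time.Second, func() error { return fmt.Errorf("apply failed") })
}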
	I1027 18:58:49.122338  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:49.124691  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:49.236766  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:49.503178  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:49.621699  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:49.624252  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:49.738065  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:50.004211  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:50.123790  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:50.124077  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:50.237194  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:50.503732  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:50.621858  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:50.624515  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:50.737766  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:51.003575  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:51.121277  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:51.123710  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:51.237365  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:51.503392  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:51.621652  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:51.623833  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:51.736972  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:52.003322  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:52.123170  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:52.123552  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:52.240232  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:52.503879  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:52.621215  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:52.624432  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:52.738161  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:53.003756  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:53.121521  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:53.123016  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:53.237018  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:53.503796  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:53.622174  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:53.623213  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:53.737557  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:54.002551  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:54.120773  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:54.122647  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:54.236563  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:54.503417  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:54.623848  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:54.624134  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:54.737363  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:55.006046  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:55.123764  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:55.125922  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:55.237293  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:55.503530  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:55.621094  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:55.622665  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:55.736330  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:56.002512  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:56.121453  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:56.122901  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:56.241371  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:56.503372  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:56.620387  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:56.622526  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:56.737313  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:57.002956  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:57.121111  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:57.123387  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:57.237443  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:57.503516  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:57.620770  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:57.623222  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:57.737281  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:58.002649  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:58.121233  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:58.123226  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:58.237403  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:58.503576  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:58.622889  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:58.623084  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:58.737449  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:59.005613  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:59.123113  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:59.123205  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:59.238384  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:59.502631  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:59.620316  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:59.622490  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:59.737346  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:00.003517  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:00.126218  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:00.143522  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:00.239448  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:00.504517  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:00.626883  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:00.627080  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:00.737151  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:01.018332  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:01.121933  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:01.123075  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:01.238285  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:01.502439  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:01.621585  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:01.622323  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:01.737577  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:02.009982  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:02.123292  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:02.124010  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:02.237673  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:02.503413  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:02.620551  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:02.622656  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:02.736827  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:03.010105  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:03.121416  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:03.121774  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:03.236874  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:03.504683  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:03.621404  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:03.623119  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:03.737320  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:04.003685  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:04.126370  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:04.126604  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:04.238384  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:04.503155  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:04.620629  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:04.623026  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:04.737190  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:05.003651  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:05.132803  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:05.133503  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:05.238182  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:05.502334  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:05.620618  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:05.622523  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:05.736604  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:06.003403  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:06.120645  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:06.124115  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:06.240709  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:06.504024  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:06.623214  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:06.623529  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:06.737625  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:07.003602  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:07.121158  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:07.123763  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:07.236974  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:07.503457  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:07.621848  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:07.623617  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:07.736759  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:08.003838  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:08.121431  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:08.123991  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:08.237069  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:08.503649  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:08.621764  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:08.623456  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:08.737621  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:09.002976  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:09.122662  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:09.123816  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:09.236926  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:09.503348  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:09.621489  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:09.622876  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:09.736854  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:10.004153  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:10.123825  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:10.124226  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:10.237463  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:10.503282  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:10.621629  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:10.631081  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:10.737496  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:10.964755  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:59:11.009371  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:11.124476  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:11.124730  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:11.237008  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:11.503550  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:11.623382  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:11.623775  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:11.737117  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:12.003270  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:12.121786  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:12.124663  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:12.161451  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.196660594s)
	W1027 18:59:12.161532  268639 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:59:12.161569  268639 retry.go:31] will retry after 25.286914289s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
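	The recurring failure above comes from kubectl's client-side validation: every YAML document must carry apiVersion and kind, and at least one document inside ig-crd.yaml evidently does not. A minimal sketch (not taken from this run; the file path is illustrative) that reproduces the same error:

	  # Hypothetical manifest missing both required top-level fields
	  cat <<'EOF' > /tmp/bad.yaml
	  metadata:
	    name: example   # no apiVersion or kind set
	  EOF
	  kubectl apply --validate=true -f /tmp/bad.yaml
	  # error: error validating "/tmp/bad.yaml": error validating data:
	  # [apiVersion not set, kind not set]; if you choose to ignore these
	  # errors, turn validation off with --validate=false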
	I1027 18:59:12.237638  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:12.505610  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:12.621047  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:12.623573  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:12.737789  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:13.009153  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:13.122470  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:13.124083  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:13.237532  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:13.502802  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:13.622917  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:13.625258  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:13.736990  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:14.003554  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:14.121116  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:14.123360  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:14.237328  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:14.502614  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:14.621540  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:14.623963  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:14.736967  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:15.004451  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:15.121731  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:15.123677  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:15.236991  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:15.504865  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:15.620900  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:15.622697  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:15.737035  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:16.003781  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:16.121554  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:16.124277  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:16.243612  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:16.504072  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:16.623667  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:16.623892  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:16.737769  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:17.003667  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:17.121107  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:17.122916  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:17.255640  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:17.504064  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:17.622939  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:17.624170  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:17.737300  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:18.002818  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:18.120967  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:18.123406  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:18.247849  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:18.503272  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:18.620545  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:18.622217  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:18.737137  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:19.002373  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:19.121395  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:19.122641  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:19.241526  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:19.502663  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:19.621471  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:19.622373  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:19.737915  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:20.003606  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:20.123253  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:20.124110  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:20.237779  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:20.504332  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:20.623494  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:20.623974  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:20.737036  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:21.003683  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:21.128329  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:21.129155  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:21.236824  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:21.503641  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:21.620665  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:21.623299  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:21.737168  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:22.004411  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:22.121005  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:22.123908  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:59:22.237477  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:22.503807  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:22.621459  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:22.623935  268639 kapi.go:107] duration metric: took 1m26.50452032s to wait for kubernetes.io/minikube-addons=registry ...
	I1027 18:59:22.737154  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:23.003005  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:23.121733  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:23.236944  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:23.504135  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:23.622177  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:23.737221  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:24.002851  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:24.121908  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:24.237505  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:24.503568  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:24.621187  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:24.736997  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:25.004293  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:25.121918  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:25.237118  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:25.502927  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:25.622070  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:25.737210  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:26.002327  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:26.120704  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:26.239004  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:26.503611  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:26.624165  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:26.737310  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:27.085742  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:27.128727  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:27.238432  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:27.502237  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:27.621166  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:27.737851  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:28.003387  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:28.120812  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:28.236605  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:28.503442  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:28.621334  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:28.737447  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:29.004017  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:29.121128  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:29.237136  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:59:29.510363  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:29.624337  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:29.737349  268639 kapi.go:107] duration metric: took 1m30.003707646s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1027 18:59:29.740733  268639 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-101592 cluster.
	I1027 18:59:29.743684  268639 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1027 18:59:29.746612  268639 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
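	As a hedged illustration of the opt-out described above (pod name and image are hypothetical, not from this run), a pod that should not receive mounted credentials would carry the label like so:

	  # Sketch: pod opted out of gcp-auth credential mounting via the
	  # gcp-auth-skip-secret label mentioned in the message above
	  cat <<'EOF' | kubectl apply -f -
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: no-gcp-creds            # hypothetical name
	    labels:
	      gcp-auth-skip-secret: "true"
	  spec:
	    containers:
	    - name: app
	      image: busybox
	      command: ["sleep", "3600"]
	  EOF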
	I1027 18:59:30.002824  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:30.121346  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:30.509877  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:30.621694  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:31.003698  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:31.121365  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:31.504136  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:31.621211  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:32.003679  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:32.120764  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:32.503268  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:32.622445  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:33.003347  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:33.121470  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:33.504211  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:33.620523  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:34.004366  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:34.121315  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:34.503605  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:34.620538  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:35.002359  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:35.120651  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:35.504966  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:35.626680  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:36.003745  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:36.121388  268639 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:59:36.504698  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:36.621016  268639 kapi.go:107] duration metric: took 1m40.503694022s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1027 18:59:37.099882  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:37.449185  268639 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:59:37.503345  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:38.002968  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:38.503799  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:38.510392  268639 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.061167706s)
	W1027 18:59:38.510451  268639 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1027 18:59:38.510529  268639 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
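	To locate the malformed document without mutating the cluster, one could dry-run the same manifest client-side; a sketch reusing the paths from the log (the flags are standard kubectl, the output is not from this run):

	  # Client-side dry run: validates ig-crd.yaml without applying it
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.34.1/kubectl apply \
	    --dry-run=client --validate=true -f /etc/kubernetes/addons/ig-crd.yaml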
	I1027 18:59:39.003834  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:39.503594  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:40.005297  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:40.506114  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:41.003798  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:41.504217  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:42.002961  268639 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:59:42.502779  268639 kapi.go:107] duration metric: took 1m46.003408719s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1027 18:59:42.505928  268639 out.go:179] * Enabled addons: registry-creds, amd-gpu-device-plugin, ingress-dns, cloud-spanner, nvidia-device-plugin, storage-provisioner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1027 18:59:42.508700  268639 addons.go:514] duration metric: took 1m52.656495526s for enable addons: enabled=[registry-creds amd-gpu-device-plugin ingress-dns cloud-spanner nvidia-device-plugin storage-provisioner metrics-server yakd default-storageclass volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1027 18:59:42.508755  268639 start.go:246] waiting for cluster config update ...
	I1027 18:59:42.508775  268639 start.go:255] writing updated cluster config ...
	I1027 18:59:42.509073  268639 ssh_runner.go:195] Run: rm -f paused
	I1027 18:59:42.512552  268639 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 18:59:42.516796  268639 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kbgn5" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:59:42.524196  268639 pod_ready.go:94] pod "coredns-66bc5c9577-kbgn5" is "Ready"
	I1027 18:59:42.524226  268639 pod_ready.go:86] duration metric: took 7.398357ms for pod "coredns-66bc5c9577-kbgn5" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:59:42.526848  268639 pod_ready.go:83] waiting for pod "etcd-addons-101592" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:59:42.532751  268639 pod_ready.go:94] pod "etcd-addons-101592" is "Ready"
	I1027 18:59:42.532779  268639 pod_ready.go:86] duration metric: took 5.906067ms for pod "etcd-addons-101592" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:59:42.535544  268639 pod_ready.go:83] waiting for pod "kube-apiserver-addons-101592" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:59:42.539992  268639 pod_ready.go:94] pod "kube-apiserver-addons-101592" is "Ready"
	I1027 18:59:42.540067  268639 pod_ready.go:86] duration metric: took 4.493744ms for pod "kube-apiserver-addons-101592" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:59:42.542553  268639 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-101592" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:59:42.917668  268639 pod_ready.go:94] pod "kube-controller-manager-addons-101592" is "Ready"
	I1027 18:59:42.917699  268639 pod_ready.go:86] duration metric: took 375.12387ms for pod "kube-controller-manager-addons-101592" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:59:43.117876  268639 pod_ready.go:83] waiting for pod "kube-proxy-k9g92" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:59:43.517231  268639 pod_ready.go:94] pod "kube-proxy-k9g92" is "Ready"
	I1027 18:59:43.517263  268639 pod_ready.go:86] duration metric: took 399.358384ms for pod "kube-proxy-k9g92" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:59:43.717667  268639 pod_ready.go:83] waiting for pod "kube-scheduler-addons-101592" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:59:44.117783  268639 pod_ready.go:94] pod "kube-scheduler-addons-101592" is "Ready"
	I1027 18:59:44.117812  268639 pod_ready.go:86] duration metric: took 400.115962ms for pod "kube-scheduler-addons-101592" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:59:44.117826  268639 pod_ready.go:40] duration metric: took 1.605245188s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 18:59:44.168647  268639 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 18:59:44.171880  268639 out.go:179] * Done! kubectl is now configured to use "addons-101592" cluster and "default" namespace by default
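	The pod_ready polling above amounts to waiting for the Ready condition on the labelled kube-system pods; a roughly equivalent manual check (label selector taken from the log, timeout matching the 4m0s budget) would be:

	  # Block until the coredns pods report Ready, or time out after 4m
	  kubectl wait --for=condition=Ready pod \
	    -l k8s-app=kube-dns -n kube-system --timeout=4m0s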
	
	
	==> CRI-O <==
	Oct 27 18:59:44 addons-101592 crio[831]: time="2025-10-27T18:59:44.447275346Z" level=info msg="Stopped pod sandbox (already stopped): d7dcb5354f059c8851a9c31f563ade53fdce46ad9cc30443ff573e7adb822d10" id=711173b2-8e1b-4e14-aa48-d6242cc1c39e name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 27 18:59:44 addons-101592 crio[831]: time="2025-10-27T18:59:44.447703307Z" level=info msg="Removing pod sandbox: d7dcb5354f059c8851a9c31f563ade53fdce46ad9cc30443ff573e7adb822d10" id=ec3a2f09-fc46-491d-b397-abcbb6ff586b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 27 18:59:44 addons-101592 crio[831]: time="2025-10-27T18:59:44.459975411Z" level=info msg="Removed pod sandbox: d7dcb5354f059c8851a9c31f563ade53fdce46ad9cc30443ff573e7adb822d10" id=ec3a2f09-fc46-491d-b397-abcbb6ff586b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 27 18:59:45 addons-101592 crio[831]: time="2025-10-27T18:59:45.252692568Z" level=info msg="Running pod sandbox: default/busybox/POD" id=57fcb041-44c0-4539-ae96-0ee39ab541a4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 18:59:45 addons-101592 crio[831]: time="2025-10-27T18:59:45.252900916Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 18:59:45 addons-101592 crio[831]: time="2025-10-27T18:59:45.273473746Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5db5ba7be1965f803e3f730a73eb39a731df03849c13d55f92474630f85b70bf UID:e1722866-51ec-40b7-b940-c96c0602e88b NetNS:/var/run/netns/407e131f-3d46-42a3-8e2c-fa46ead6926d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012dc58}] Aliases:map[]}"
	Oct 27 18:59:45 addons-101592 crio[831]: time="2025-10-27T18:59:45.273674997Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 27 18:59:45 addons-101592 crio[831]: time="2025-10-27T18:59:45.285336456Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5db5ba7be1965f803e3f730a73eb39a731df03849c13d55f92474630f85b70bf UID:e1722866-51ec-40b7-b940-c96c0602e88b NetNS:/var/run/netns/407e131f-3d46-42a3-8e2c-fa46ead6926d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012dc58}] Aliases:map[]}"
	Oct 27 18:59:45 addons-101592 crio[831]: time="2025-10-27T18:59:45.285546929Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 27 18:59:45 addons-101592 crio[831]: time="2025-10-27T18:59:45.305986282Z" level=info msg="Ran pod sandbox 5db5ba7be1965f803e3f730a73eb39a731df03849c13d55f92474630f85b70bf with infra container: default/busybox/POD" id=57fcb041-44c0-4539-ae96-0ee39ab541a4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 18:59:45 addons-101592 crio[831]: time="2025-10-27T18:59:45.318063789Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=94d02fcf-48ae-4e79-be7c-abccbe26ec17 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 18:59:45 addons-101592 crio[831]: time="2025-10-27T18:59:45.318360004Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=94d02fcf-48ae-4e79-be7c-abccbe26ec17 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 18:59:45 addons-101592 crio[831]: time="2025-10-27T18:59:45.318464781Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=94d02fcf-48ae-4e79-be7c-abccbe26ec17 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 18:59:45 addons-101592 crio[831]: time="2025-10-27T18:59:45.323720304Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9cd327eb-4629-47eb-970d-308ab7acd6ae name=/runtime.v1.ImageService/PullImage
	Oct 27 18:59:45 addons-101592 crio[831]: time="2025-10-27T18:59:45.332294755Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 27 18:59:47 addons-101592 crio[831]: time="2025-10-27T18:59:47.352409265Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=9cd327eb-4629-47eb-970d-308ab7acd6ae name=/runtime.v1.ImageService/PullImage
	Oct 27 18:59:47 addons-101592 crio[831]: time="2025-10-27T18:59:47.353619222Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=403eb0f7-3413-459c-b6ad-45d512216fad name=/runtime.v1.ImageService/ImageStatus
	Oct 27 18:59:47 addons-101592 crio[831]: time="2025-10-27T18:59:47.357106484Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=82331170-3269-4da3-a5a5-9add02fddde7 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 18:59:47 addons-101592 crio[831]: time="2025-10-27T18:59:47.365183413Z" level=info msg="Creating container: default/busybox/busybox" id=abe02fc0-3fb3-4667-87a1-840bae93a83e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 18:59:47 addons-101592 crio[831]: time="2025-10-27T18:59:47.365321264Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 18:59:47 addons-101592 crio[831]: time="2025-10-27T18:59:47.372443573Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 18:59:47 addons-101592 crio[831]: time="2025-10-27T18:59:47.372932423Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 18:59:47 addons-101592 crio[831]: time="2025-10-27T18:59:47.388417587Z" level=info msg="Created container 0f6f9956373947772112071d471a3e515c6eb978477ffc89e363a0b1db2dd6a4: default/busybox/busybox" id=abe02fc0-3fb3-4667-87a1-840bae93a83e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 18:59:47 addons-101592 crio[831]: time="2025-10-27T18:59:47.390875654Z" level=info msg="Starting container: 0f6f9956373947772112071d471a3e515c6eb978477ffc89e363a0b1db2dd6a4" id=c768a729-9827-47d3-9378-2bc3bf79113c name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 18:59:47 addons-101592 crio[831]: time="2025-10-27T18:59:47.392997976Z" level=info msg="Started container" PID=5054 containerID=0f6f9956373947772112071d471a3e515c6eb978477ffc89e363a0b1db2dd6a4 description=default/busybox/busybox id=c768a729-9827-47d3-9378-2bc3bf79113c name=/runtime.v1.RuntimeService/StartContainer sandboxID=5db5ba7be1965f803e3f730a73eb39a731df03849c13d55f92474630f85b70bf
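	
	The lines above trace the full CRI lifecycle for default/busybox: RunPodSandbox, ImageStatus (miss), PullImage, CreateContainer, StartContainer. A minimal sketch for inspecting the same objects over the CRI from inside the node (standard crictl subcommands; the profile name and IDs are taken from this run):
	
	  $ minikube -p addons-101592 ssh
	  $ sudo crictl pods --name busybox       # sandbox 5db5ba7be1965...
	  $ sudo crictl ps -a --name busybox      # container 0f6f995637394...
	  $ sudo crictl inspect 0f6f995637394     # full runtime state as JSON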
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	0f6f995637394       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          9 seconds ago        Running             busybox                                  0                   5db5ba7be1965       busybox                                     default
	3e163786b302c       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          15 seconds ago       Running             csi-snapshotter                          0                   6cf931804122e       csi-hostpathplugin-42bzh                    kube-system
	838eb4978f205       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          16 seconds ago       Running             csi-provisioner                          0                   6cf931804122e       csi-hostpathplugin-42bzh                    kube-system
	d6943420da6a4       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            18 seconds ago       Running             liveness-probe                           0                   6cf931804122e       csi-hostpathplugin-42bzh                    kube-system
	7e2b5aeafd005       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           19 seconds ago       Running             hostpath                                 0                   6cf931804122e       csi-hostpathplugin-42bzh                    kube-system
	2aa2da6d8f06b       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                20 seconds ago       Running             node-driver-registrar                    0                   6cf931804122e       csi-hostpathplugin-42bzh                    kube-system
	0a9000dd55e0a       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             21 seconds ago       Running             controller                               0                   79cefa06360e1       ingress-nginx-controller-675c5ddd98-ql9nw   ingress-nginx
	7adff4fa7265c       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 28 seconds ago       Running             gcp-auth                                 0                   8dbf1515b6d55       gcp-auth-78565c9fb4-vrdnc                   gcp-auth
	c0bc4ccc46eff       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            31 seconds ago       Running             gadget                                   0                   618425097f045       gadget-647wx                                gadget
	50e0b6b85b8a7       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              35 seconds ago       Running             registry-proxy                           0                   4cdae724316c7       registry-proxy-k87sb                        kube-system
	cd4e827788ce9       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   39 seconds ago       Running             csi-external-health-monitor-controller   0                   6cf931804122e       csi-hostpathplugin-42bzh                    kube-system
	7f055e0b328c7       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     40 seconds ago       Running             nvidia-device-plugin-ctr                 0                   934eac4cc468b       nvidia-device-plugin-daemonset-sghjb        kube-system
	898e5fee8fdcf       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             40 seconds ago       Exited              patch                                    2                   0a4e7fbf2d470       ingress-nginx-admission-patch-6wkkl         ingress-nginx
	9f34bd9c0c18a       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             40 seconds ago       Exited              patch                                    2                   bb7f57706ff40       gcp-auth-certs-patch-hfjgp                  gcp-auth
	7e472e3b179a0       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              54 seconds ago       Running             csi-resizer                              0                   e505c927ba76d       csi-hostpath-resizer-0                      kube-system
	3043062bbd230       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   55 seconds ago       Exited              create                                   0                   00e18a96f48c5       ingress-nginx-admission-create-6hkms        ingress-nginx
	a9d1dbc41feea       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           56 seconds ago       Running             registry                                 0                   b5e278c9e2a43       registry-6b586f9694-jvgtv                   kube-system
	bfa31a6efbb78       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      59 seconds ago       Running             volume-snapshot-controller               0                   2528bbcc805a3       snapshot-controller-7d9fbc56b8-pqsgt        kube-system
	8c0b8c2d5a795       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             59 seconds ago       Running             local-path-provisioner                   0                   7cf5a05b8948a       local-path-provisioner-648f6765c9-jcsl9     local-path-storage
	458122abc2a23       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   c9c757080912c       snapshot-controller-7d9fbc56b8-pvz49        kube-system
	2d95c80ef718c       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   02e3dbfc1d399       csi-hostpath-attacher-0                     kube-system
	820e580b2ebe2       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               About a minute ago   Running             cloud-spanner-emulator                   0                   46f7814a88ada       cloud-spanner-emulator-86bd5cbb97-zkplg     default
	a33d881a2daa0       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   b0ebf26d8430e       kube-ingress-dns-minikube                   kube-system
	8753d9b0a7cb8       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   12c44a9aa9966       yakd-dashboard-5ff678cb9-lhv5m              yakd-dashboard
	8ebcbe2c9975f       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   b7aeb407c31fc       metrics-server-85b7d694d7-mmqw2             kube-system
	fe0ea9b1d2cf2       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   848b0a7a17c8a       coredns-66bc5c9577-kbgn5                    kube-system
	f81e4711fbc01       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   9f36c68eeea05       storage-provisioner                         kube-system
	28587c37519da       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   d16d0b93a413f       kube-proxy-k9g92                            kube-system
	1e50342291985       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   c1995e14af0fe       kindnet-87t7g                               kube-system
	d5efbcd6024e7       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   ab64199fdc6fa       etcd-addons-101592                          kube-system
	4a399d9b30f5e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   e76ab52a5758c       kube-controller-manager-addons-101592       kube-system
	752e65ab367c9       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   1b88e16ed88a7       kube-scheduler-addons-101592                kube-system
	02d35f7174bb3       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   5719b22fd6403       kube-apiserver-addons-101592                kube-system
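	
	The table above is crictl-style output. A hedged way to regenerate it non-interactively, assuming the same profile name:
	
	  $ minikube -p addons-101592 ssh -- sudo crictl ps -a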
	
	
	==> coredns [fe0ea9b1d2cf2cbefbf8c8c1c5b40f8d072efd2d489b534d519c969d9f5078d6] <==
	[INFO] 10.244.0.16:48505 - 44344 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000077561s
	[INFO] 10.244.0.16:48505 - 56078 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002269378s
	[INFO] 10.244.0.16:48505 - 10738 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002079392s
	[INFO] 10.244.0.16:48505 - 50748 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000148534s
	[INFO] 10.244.0.16:48505 - 11375 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000146064s
	[INFO] 10.244.0.16:56548 - 9302 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000151258s
	[INFO] 10.244.0.16:56548 - 9097 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000069332s
	[INFO] 10.244.0.16:52312 - 29241 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000087235s
	[INFO] 10.244.0.16:52312 - 29069 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000066345s
	[INFO] 10.244.0.16:34698 - 7151 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00007601s
	[INFO] 10.244.0.16:34698 - 6707 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000079449s
	[INFO] 10.244.0.16:42050 - 39527 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001439383s
	[INFO] 10.244.0.16:42050 - 39346 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001415663s
	[INFO] 10.244.0.16:45965 - 16759 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000125823s
	[INFO] 10.244.0.16:45965 - 16595 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000130195s
	[INFO] 10.244.0.21:48819 - 47143 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000151865s
	[INFO] 10.244.0.21:44287 - 20735 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000140001s
	[INFO] 10.244.0.21:59920 - 29863 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00010682s
	[INFO] 10.244.0.21:51683 - 23029 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000074624s
	[INFO] 10.244.0.21:48016 - 44962 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000084347s
	[INFO] 10.244.0.21:52341 - 27557 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000095276s
	[INFO] 10.244.0.21:50552 - 29982 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001822464s
	[INFO] 10.244.0.21:55432 - 40925 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001681798s
	[INFO] 10.244.0.21:60096 - 34132 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.000559889s
	[INFO] 10.244.0.21:42923 - 23822 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001679706s
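	
	The NXDOMAIN-then-NOERROR cascades above are the standard ndots:5 search-path expansion: the stub resolver appends each search suffix before trying the bare name, so only the final absolute query succeeds. A sketch of the pod resolv.conf that would produce exactly this sequence for the 10.244.0.16 (kube-system) client; the nameserver is the usual kube-dns service IP, an assumption, while the search suffixes are read off the queries above:
	
	  # /etc/resolv.conf inside a kube-system pod
	  search kube-system.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	  nameserver 10.96.0.10
	  options ndots:5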
	
	
	==> describe nodes <==
	Name:               addons-101592
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-101592
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=addons-101592
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T18_57_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-101592
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-101592"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 18:57:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-101592
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 18:59:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 18:59:48 +0000   Mon, 27 Oct 2025 18:57:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 18:59:48 +0000   Mon, 27 Oct 2025 18:57:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 18:59:48 +0000   Mon, 27 Oct 2025 18:57:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 18:59:48 +0000   Mon, 27 Oct 2025 18:58:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-101592
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                e04f1509-9fda-4a47-ab13-403e07d0fc28
	  Boot ID:                    23ea05b4-8203-4fb9-a84a-5deb4b091cbb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  default                     cloud-spanner-emulator-86bd5cbb97-zkplg      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  gadget                      gadget-647wx                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  gcp-auth                    gcp-auth-78565c9fb4-vrdnc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-ql9nw    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m1s
	  kube-system                 coredns-66bc5c9577-kbgn5                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m7s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 csi-hostpathplugin-42bzh                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 etcd-addons-101592                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m13s
	  kube-system                 kindnet-87t7g                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m8s
	  kube-system                 kube-apiserver-addons-101592                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-controller-manager-addons-101592        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-k9g92                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-scheduler-addons-101592                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 metrics-server-85b7d694d7-mmqw2              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m3s
	  kube-system                 nvidia-device-plugin-daemonset-sghjb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 registry-6b586f9694-jvgtv                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 registry-creds-764b6fb674-fz96k              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 registry-proxy-k87sb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 snapshot-controller-7d9fbc56b8-pqsgt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 snapshot-controller-7d9fbc56b8-pvz49         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  local-path-storage          local-path-provisioner-648f6765c9-jcsl9      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-lhv5m               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m5s                   kube-proxy       
	  Warning  CgroupV1                 2m20s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m20s (x8 over 2m20s)  kubelet          Node addons-101592 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m20s (x8 over 2m20s)  kubelet          Node addons-101592 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m20s (x8 over 2m20s)  kubelet          Node addons-101592 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m13s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m13s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m13s                  kubelet          Node addons-101592 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m13s                  kubelet          Node addons-101592 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m13s                  kubelet          Node addons-101592 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m9s                   node-controller  Node addons-101592 event: Registered Node addons-101592 in Controller
	  Normal   NodeReady                86s                    kubelet          Node addons-101592 status is now: NodeReady
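	
	The Allocated resources figures check out against the pod table: CPU requests are 100m (ingress-nginx controller) + 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) + 100m (metrics-server) = 1050m, i.e. 52% of the 2000m allocatable. To pull just this block from a live cluster:
	
	  $ kubectl describe node addons-101592 | grep -A 10 'Allocated resources'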
	
	
	==> dmesg <==
	[ +30.305925] overlayfs: idmapped layers are currently not supported
	[Oct27 18:28] overlayfs: idmapped layers are currently not supported
	[Oct27 18:29] overlayfs: idmapped layers are currently not supported
	[Oct27 18:30] overlayfs: idmapped layers are currently not supported
	[ +18.215952] overlayfs: idmapped layers are currently not supported
	[Oct27 18:31] overlayfs: idmapped layers are currently not supported
	[ +35.797174] overlayfs: idmapped layers are currently not supported
	[Oct27 18:32] overlayfs: idmapped layers are currently not supported
	[Oct27 18:34] overlayfs: idmapped layers are currently not supported
	[ +38.178588] overlayfs: idmapped layers are currently not supported
	[Oct27 18:36] overlayfs: idmapped layers are currently not supported
	[ +29.649930] overlayfs: idmapped layers are currently not supported
	[Oct27 18:37] overlayfs: idmapped layers are currently not supported
	[Oct27 18:38] overlayfs: idmapped layers are currently not supported
	[ +26.025304] overlayfs: idmapped layers are currently not supported
	[Oct27 18:39] overlayfs: idmapped layers are currently not supported
	[  +8.720024] overlayfs: idmapped layers are currently not supported
	[Oct27 18:40] overlayfs: idmapped layers are currently not supported
	[Oct27 18:41] overlayfs: idmapped layers are currently not supported
	[Oct27 18:42] overlayfs: idmapped layers are currently not supported
	[Oct27 18:43] overlayfs: idmapped layers are currently not supported
	[Oct27 18:44] overlayfs: idmapped layers are currently not supported
	[ +50.528384] overlayfs: idmapped layers are currently not supported
	[Oct27 18:56] kauditd_printk_skb: 8 callbacks suppressed
	[Oct27 18:57] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d5efbcd6024e7af2a933432a33e2b4197973d93eab2d9c966cb6e488d871f23b] <==
	{"level":"warn","ts":"2025-10-27T18:57:40.851857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:40.867178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:40.882401Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:40.899148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:40.915129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:40.928875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:40.942170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:40.957470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:40.983943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:40.987297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:41.006233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:41.021069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:41.035166Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:41.050057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:41.064763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:41.094021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:41.107709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:41.121567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:41.184265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:56.811289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:56.824913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:58:18.905554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:58:18.925334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:58:18.934222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:58:18.949147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49202","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [7adff4fa7265c0dee6e25f1e0e66d068a1947079c2f3e24230ee78e5700557ba] <==
	2025/10/27 18:59:28 GCP Auth Webhook started!
	2025/10/27 18:59:44 Ready to marshal response ...
	2025/10/27 18:59:44 Ready to write response ...
	2025/10/27 18:59:44 Ready to marshal response ...
	2025/10/27 18:59:44 Ready to write response ...
	2025/10/27 18:59:45 Ready to marshal response ...
	2025/10/27 18:59:45 Ready to write response ...
	
	
	==> kernel <==
	 18:59:57 up  1:42,  0 user,  load average: 2.09, 1.36, 1.91
	Linux addons-101592 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1e5034229198563b54bd8e5f173e0492eee9301465992e61e3c894cb36e53dd6] <==
	E1027 18:58:21.349473       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1027 18:58:21.355034       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1027 18:58:21.355170       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1027 18:58:21.356236       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1027 18:58:22.855643       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 18:58:22.855667       1 metrics.go:72] Registering metrics
	I1027 18:58:22.855720       1 controller.go:711] "Syncing nftables rules"
	I1027 18:58:31.356416       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 18:58:31.356483       1 main.go:301] handling current node
	I1027 18:58:41.351119       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 18:58:41.351185       1 main.go:301] handling current node
	I1027 18:58:51.348947       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 18:58:51.349019       1 main.go:301] handling current node
	I1027 18:59:01.349631       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 18:59:01.349662       1 main.go:301] handling current node
	I1027 18:59:11.355065       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 18:59:11.355100       1 main.go:301] handling current node
	I1027 18:59:21.350628       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 18:59:21.350662       1 main.go:301] handling current node
	I1027 18:59:31.349945       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 18:59:31.350034       1 main.go:301] handling current node
	I1027 18:59:41.350445       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 18:59:41.350482       1 main.go:301] handling current node
	I1027 18:59:51.350008       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 18:59:51.350055       1 main.go:301] handling current node
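	
	The "Failed to watch ... dial tcp 10.96.0.1:443: i/o timeout" errors at the top are a startup race against the apiserver service VIP; they stop once caches sync at 18:58:22, after which the per-node handling loop settles into its 10s cadence. To tail the current state (DaemonSet name assumed from the pod name kindnet-87t7g):
	
	  $ kubectl -n kube-system logs daemonset/kindnet --tail=20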
	
	
	==> kube-apiserver [02d35f7174bb35a983c70121d66fa26a8b0d33acfa279db1bc1b3d623b4f8f09] <==
	W1027 18:58:18.893962       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1027 18:58:18.912924       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1027 18:58:18.933743       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1027 18:58:18.948318       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1027 18:58:31.854943       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.118.75:443: connect: connection refused
	E1027 18:58:31.855894       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.118.75:443: connect: connection refused" logger="UnhandledError"
	W1027 18:58:31.856425       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.118.75:443: connect: connection refused
	E1027 18:58:31.856450       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.118.75:443: connect: connection refused" logger="UnhandledError"
	W1027 18:58:31.944941       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.118.75:443: connect: connection refused
	E1027 18:58:31.944986       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.118.75:443: connect: connection refused" logger="UnhandledError"
	W1027 18:58:36.862485       1 handler_proxy.go:99] no RequestInfo found in the context
	E1027 18:58:36.862537       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.239.58:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.239.58:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.239.58:443: connect: connection refused" logger="UnhandledError"
	E1027 18:58:36.862568       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1027 18:58:36.863211       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.239.58:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.239.58:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.239.58:443: connect: connection refused" logger="UnhandledError"
	E1027 18:58:36.868446       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.239.58:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.239.58:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.239.58:443: connect: connection refused" logger="UnhandledError"
	E1027 18:58:36.889508       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.239.58:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.239.58:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.239.58:443: connect: connection refused" logger="UnhandledError"
	E1027 18:58:36.931111       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.239.58:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.239.58:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.239.58:443: connect: connection refused" logger="UnhandledError"
	E1027 18:58:37.012884       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.239.58:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.239.58:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.239.58:443: connect: connection refused" logger="UnhandledError"
	I1027 18:58:37.277160       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1027 18:59:55.188817       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55982: use of closed network connection
	E1027 18:59:55.463221       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56018: use of closed network connection
	E1027 18:59:55.599585       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56028: use of closed network connection
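	
	The gcp-auth-mutate.k8s.io calls above fail open: admission proceeds despite the unreachable webhook endpoint, which is the behavior of failurePolicy: Ignore. A hedged check of the configured policy that lists all mutating webhook configurations rather than guessing the object name:
	
	  $ kubectl get mutatingwebhookconfigurations -o custom-columns='NAME:.metadata.name,POLICY:.webhooks[*].failurePolicy'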
	
	
	==> kube-controller-manager [4a399d9b30f5e68ec2fe6ac2e4b287b9821a6d963956dc63355919c90bbdbff6] <==
	I1027 18:57:48.924419       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 18:57:48.924657       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1027 18:57:48.924747       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1027 18:57:48.924830       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1027 18:57:48.924861       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1027 18:57:48.924908       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 18:57:48.925261       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 18:57:48.927038       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 18:57:48.927246       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 18:57:48.927343       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 18:57:48.930079       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1027 18:57:48.932550       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1027 18:57:48.935090       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 18:57:48.937517       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-101592" podCIDRs=["10.244.0.0/24"]
	I1027 18:57:48.937686       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 18:57:48.939069       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	E1027 18:57:54.924820       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1027 18:58:18.880374       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1027 18:58:18.884490       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	E1027 18:58:18.939326       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1027 18:58:18.939477       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1027 18:58:18.939526       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1027 18:58:18.939553       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 18:58:18.985583       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 18:58:33.879560       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [28587c37519da029d28ad619eccddb0f9cf6be1a53a5a3ca6648894a187e90d2] <==
	I1027 18:57:51.414811       1 server_linux.go:53] "Using iptables proxy"
	I1027 18:57:51.500494       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 18:57:51.600822       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 18:57:51.600869       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1027 18:57:51.600958       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 18:57:51.666090       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 18:57:51.666149       1 server_linux.go:132] "Using iptables Proxier"
	I1027 18:57:51.685554       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 18:57:51.685865       1 server.go:527] "Version info" version="v1.34.1"
	I1027 18:57:51.685883       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 18:57:51.687428       1 config.go:200] "Starting service config controller"
	I1027 18:57:51.687440       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 18:57:51.687467       1 config.go:106] "Starting endpoint slice config controller"
	I1027 18:57:51.687471       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 18:57:51.687482       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 18:57:51.687486       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 18:57:51.688093       1 config.go:309] "Starting node config controller"
	I1027 18:57:51.688100       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 18:57:51.688105       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 18:57:51.787833       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 18:57:51.787878       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 18:57:51.787913       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [752e65ab367c93dbf18c03c14f825896a94d2b17a2a40668c50b5961b7e50972] <==
	E1027 18:57:41.959476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 18:57:41.959578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 18:57:41.959703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 18:57:41.959827       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 18:57:41.959937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 18:57:41.960052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 18:57:41.960153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 18:57:41.960249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 18:57:41.960358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 18:57:41.960481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 18:57:41.960584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 18:57:41.960686       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 18:57:41.961663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1027 18:57:41.961878       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 18:57:41.962029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 18:57:41.962261       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 18:57:41.962330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 18:57:42.882614       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1027 18:57:42.895248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 18:57:42.918743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 18:57:42.984091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 18:57:43.028782       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 18:57:43.047194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 18:57:43.077088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1027 18:57:44.688371       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 18:59:18 addons-101592 kubelet[1297]: I1027 18:59:18.414833    1297 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2aeb1f1-e8f0-4f53-80e8-6b5bd30857b1-kube-api-access-zt4jt" (OuterVolumeSpecName: "kube-api-access-zt4jt") pod "f2aeb1f1-e8f0-4f53-80e8-6b5bd30857b1" (UID: "f2aeb1f1-e8f0-4f53-80e8-6b5bd30857b1"). InnerVolumeSpecName "kube-api-access-zt4jt". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 27 18:59:18 addons-101592 kubelet[1297]: I1027 18:59:18.415105    1297 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57d79cbe-e8a7-4268-b65f-b4349871124e-kube-api-access-g9gcf" (OuterVolumeSpecName: "kube-api-access-g9gcf") pod "57d79cbe-e8a7-4268-b65f-b4349871124e" (UID: "57d79cbe-e8a7-4268-b65f-b4349871124e"). InnerVolumeSpecName "kube-api-access-g9gcf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 27 18:59:18 addons-101592 kubelet[1297]: I1027 18:59:18.513082    1297 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-g9gcf\" (UniqueName: \"kubernetes.io/projected/57d79cbe-e8a7-4268-b65f-b4349871124e-kube-api-access-g9gcf\") on node \"addons-101592\" DevicePath \"\""
	Oct 27 18:59:18 addons-101592 kubelet[1297]: I1027 18:59:18.513131    1297 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zt4jt\" (UniqueName: \"kubernetes.io/projected/f2aeb1f1-e8f0-4f53-80e8-6b5bd30857b1-kube-api-access-zt4jt\") on node \"addons-101592\" DevicePath \"\""
	Oct 27 18:59:19 addons-101592 kubelet[1297]: I1027 18:59:19.226626    1297 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb7f57706ff40a29629ee4d4394ece501c21661364c386d01d2f862cbb9688b3"
	Oct 27 18:59:19 addons-101592 kubelet[1297]: I1027 18:59:19.229654    1297 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a4e7fbf2d470dc5e9fb9dd955ca9dee00c6ac00d4d0ec5b623e87d007335151"
	Oct 27 18:59:22 addons-101592 kubelet[1297]: I1027 18:59:22.254393    1297 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-k87sb" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 18:59:22 addons-101592 kubelet[1297]: I1027 18:59:22.274135    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-k87sb" podStartSLOduration=2.750187252 podStartE2EDuration="51.274116038s" podCreationTimestamp="2025-10-27 18:58:31 +0000 UTC" firstStartedPulling="2025-10-27 18:58:32.902302871 +0000 UTC m=+48.630406971" lastFinishedPulling="2025-10-27 18:59:21.426231657 +0000 UTC m=+97.154335757" observedRunningTime="2025-10-27 18:59:22.27317238 +0000 UTC m=+98.001276480" watchObservedRunningTime="2025-10-27 18:59:22.274116038 +0000 UTC m=+98.002220138"
	Oct 27 18:59:23 addons-101592 kubelet[1297]: I1027 18:59:23.257063    1297 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-k87sb" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 18:59:26 addons-101592 kubelet[1297]: I1027 18:59:26.286309    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-647wx" podStartSLOduration=66.155695953 podStartE2EDuration="1m31.286281732s" podCreationTimestamp="2025-10-27 18:57:55 +0000 UTC" firstStartedPulling="2025-10-27 18:59:00.398009094 +0000 UTC m=+76.126113193" lastFinishedPulling="2025-10-27 18:59:25.528594872 +0000 UTC m=+101.256698972" observedRunningTime="2025-10-27 18:59:26.285582516 +0000 UTC m=+102.013686632" watchObservedRunningTime="2025-10-27 18:59:26.286281732 +0000 UTC m=+102.014385832"
	Oct 27 18:59:29 addons-101592 kubelet[1297]: I1027 18:59:29.645845    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-vrdnc" podStartSLOduration=66.270232363 podStartE2EDuration="1m30.645828045s" podCreationTimestamp="2025-10-27 18:57:59 +0000 UTC" firstStartedPulling="2025-10-27 18:59:04.365118507 +0000 UTC m=+80.093222615" lastFinishedPulling="2025-10-27 18:59:28.740714189 +0000 UTC m=+104.468818297" observedRunningTime="2025-10-27 18:59:29.301639763 +0000 UTC m=+105.029743871" watchObservedRunningTime="2025-10-27 18:59:29.645828045 +0000 UTC m=+105.373932145"
	Oct 27 18:59:34 addons-101592 kubelet[1297]: I1027 18:59:34.381826    1297 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5b61436-fdb2-4e09-b303-4ef0400a8dc7" path="/var/lib/kubelet/pods/c5b61436-fdb2-4e09-b303-4ef0400a8dc7/volumes"
	Oct 27 18:59:35 addons-101592 kubelet[1297]: E1027 18:59:35.786723    1297 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 27 18:59:35 addons-101592 kubelet[1297]: E1027 18:59:35.786806    1297 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aca5b0e8-8150-49b6-a3e8-536f93b6d0fe-gcr-creds podName:aca5b0e8-8150-49b6-a3e8-536f93b6d0fe nodeName:}" failed. No retries permitted until 2025-10-27 19:00:39.786787791 +0000 UTC m=+175.514891891 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/aca5b0e8-8150-49b6-a3e8-536f93b6d0fe-gcr-creds") pod "registry-creds-764b6fb674-fz96k" (UID: "aca5b0e8-8150-49b6-a3e8-536f93b6d0fe") : secret "registry-creds-gcr" not found
	Oct 27 18:59:36 addons-101592 kubelet[1297]: I1027 18:59:36.351146    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-ql9nw" podStartSLOduration=69.523655542 podStartE2EDuration="1m40.351129005s" podCreationTimestamp="2025-10-27 18:57:56 +0000 UTC" firstStartedPulling="2025-10-27 18:59:04.460626543 +0000 UTC m=+80.188730643" lastFinishedPulling="2025-10-27 18:59:35.288099998 +0000 UTC m=+111.016204106" observedRunningTime="2025-10-27 18:59:36.350833086 +0000 UTC m=+112.078937202" watchObservedRunningTime="2025-10-27 18:59:36.351129005 +0000 UTC m=+112.079233113"
	Oct 27 18:59:38 addons-101592 kubelet[1297]: I1027 18:59:38.616137    1297 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 27 18:59:38 addons-101592 kubelet[1297]: I1027 18:59:38.616207    1297 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 27 18:59:42 addons-101592 kubelet[1297]: I1027 18:59:42.404307    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-42bzh" podStartSLOduration=2.486635287 podStartE2EDuration="1m11.404287001s" podCreationTimestamp="2025-10-27 18:58:31 +0000 UTC" firstStartedPulling="2025-10-27 18:58:32.875766173 +0000 UTC m=+48.603870273" lastFinishedPulling="2025-10-27 18:59:41.793417888 +0000 UTC m=+117.521521987" observedRunningTime="2025-10-27 18:59:42.397033084 +0000 UTC m=+118.125137364" watchObservedRunningTime="2025-10-27 18:59:42.404287001 +0000 UTC m=+118.132391100"
	Oct 27 18:59:44 addons-101592 kubelet[1297]: I1027 18:59:44.429857    1297 scope.go:117] "RemoveContainer" containerID="a5c8472b85c5e4193e5ded6cdbc320d8064140be41b3a0f2b79a4c1f609abc78"
	Oct 27 18:59:44 addons-101592 kubelet[1297]: E1027 18:59:44.633635    1297 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/198471e23f5930ae007ba32506f9cd7d0e6ab56a3d2cf1962de47c5adfb20786/diff" to get inode usage: stat /var/lib/containers/storage/overlay/198471e23f5930ae007ba32506f9cd7d0e6ab56a3d2cf1962de47c5adfb20786/diff: no such file or directory, extraDiskErr: <nil>
	Oct 27 18:59:45 addons-101592 kubelet[1297]: I1027 18:59:45.074920    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzr5t\" (UniqueName: \"kubernetes.io/projected/e1722866-51ec-40b7-b940-c96c0602e88b-kube-api-access-zzr5t\") pod \"busybox\" (UID: \"e1722866-51ec-40b7-b940-c96c0602e88b\") " pod="default/busybox"
	Oct 27 18:59:45 addons-101592 kubelet[1297]: I1027 18:59:45.075719    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e1722866-51ec-40b7-b940-c96c0602e88b-gcp-creds\") pod \"busybox\" (UID: \"e1722866-51ec-40b7-b940-c96c0602e88b\") " pod="default/busybox"
	Oct 27 18:59:48 addons-101592 kubelet[1297]: I1027 18:59:48.421997    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.386294117 podStartE2EDuration="4.421977055s" podCreationTimestamp="2025-10-27 18:59:44 +0000 UTC" firstStartedPulling="2025-10-27 18:59:45.319028714 +0000 UTC m=+121.047132838" lastFinishedPulling="2025-10-27 18:59:47.354711677 +0000 UTC m=+123.082815776" observedRunningTime="2025-10-27 18:59:48.420012638 +0000 UTC m=+124.148116738" watchObservedRunningTime="2025-10-27 18:59:48.421977055 +0000 UTC m=+124.150081155"
	Oct 27 18:59:50 addons-101592 kubelet[1297]: I1027 18:59:50.381618    1297 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2aeb1f1-e8f0-4f53-80e8-6b5bd30857b1" path="/var/lib/kubelet/pods/f2aeb1f1-e8f0-4f53-80e8-6b5bd30857b1/volumes"
	Oct 27 18:59:55 addons-101592 kubelet[1297]: E1027 18:59:55.600762    1297 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:51788->127.0.0.1:43047: write tcp 127.0.0.1:51788->127.0.0.1:43047: write: broken pipe
	
	
	==> storage-provisioner [f81e4711fbc01c871625a0d7da229dd580feb4c12934b3a38b50966b57a4259b] <==
	W1027 18:59:33.202905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:35.206309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:35.211267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:37.214268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:37.220065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:39.223398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:39.230079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:41.233237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:41.238620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:43.241685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:43.246414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:45.250836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:45.264297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:47.267335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:47.271141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:49.274466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:49.280647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:51.283660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:51.288772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:53.292435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:53.299418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:55.302780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:55.315654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:57.320535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:57.333711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
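
The storage-provisioner warnings in the log above are client-go deprecation notices: the provisioner still lists and watches core/v1 Endpoints (most likely for its Endpoints-based leader-election lock), an API deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice. They are warnings only and unrelated to this failure. Below is a minimal client-go sketch of the replacement API the warning points to — illustrative only, not storage-provisioner's actual code; the kubeconfig path is an assumption:

	// endpointslices.go: list discovery.k8s.io/v1 EndpointSlices, the API the
	// deprecation warning recommends over core/v1 Endpoints. Illustrative only.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumes the default kubeconfig location for the sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// List EndpointSlices instead of the deprecated core/v1 Endpoints.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Println(s.Name, "->", len(s.Endpoints), "endpoints")
		}
	}
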
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-101592 -n addons-101592
helpers_test.go:269: (dbg) Run:  kubectl --context addons-101592 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-6hkms ingress-nginx-admission-patch-6wkkl registry-creds-764b6fb674-fz96k
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-101592 describe pod ingress-nginx-admission-create-6hkms ingress-nginx-admission-patch-6wkkl registry-creds-764b6fb674-fz96k
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-101592 describe pod ingress-nginx-admission-create-6hkms ingress-nginx-admission-patch-6wkkl registry-creds-764b6fb674-fz96k: exit status 1 (88.177684ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-6hkms" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-6wkkl" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-fz96k" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-101592 describe pod ingress-nginx-admission-create-6hkms ingress-nginx-admission-patch-6wkkl registry-creds-764b6fb674-fz96k: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-101592 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-101592 addons disable headlamp --alsologtostderr -v=1: exit status 11 (264.638082ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1027 18:59:58.697615  275311 out.go:360] Setting OutFile to fd 1 ...
	I1027 18:59:58.698378  275311 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:58.698415  275311 out.go:374] Setting ErrFile to fd 2...
	I1027 18:59:58.698439  275311 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:58.698794  275311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 18:59:58.699163  275311 mustload.go:65] Loading cluster: addons-101592
	I1027 18:59:58.699594  275311 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:58.699636  275311 addons.go:606] checking whether the cluster is paused
	I1027 18:59:58.699765  275311 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:58.699807  275311 host.go:66] Checking if "addons-101592" exists ...
	I1027 18:59:58.700343  275311 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 18:59:58.718630  275311 ssh_runner.go:195] Run: systemctl --version
	I1027 18:59:58.718692  275311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 18:59:58.736631  275311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 18:59:58.845522  275311 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 18:59:58.845608  275311 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 18:59:58.880026  275311 cri.go:89] found id: "3e163786b302c708b17fe3ccf72adc340a75e221cd0896244768e1510b5aa88f"
	I1027 18:59:58.880049  275311 cri.go:89] found id: "838eb4978f205b327ad70c9fe726ab47413794195e774fac18ab9ae207723d6a"
	I1027 18:59:58.880055  275311 cri.go:89] found id: "d6943420da6a4a5c627e3e1859fcefbdc0f116ec559255515e3c76875ef303ec"
	I1027 18:59:58.880071  275311 cri.go:89] found id: "7e2b5aeafd0057d1e7d6c15f07b26a4833c3369e948af52f6ce2a927043ec7ce"
	I1027 18:59:58.880092  275311 cri.go:89] found id: "2aa2da6d8f06b0fc78c255affa19eec7d586298271ac54c0800166ccd1369669"
	I1027 18:59:58.880104  275311 cri.go:89] found id: "50e0b6b85b8a72a9fc162999003b05f4316b2f1d308c55874ee95912e0e1662e"
	I1027 18:59:58.880108  275311 cri.go:89] found id: "cd4e827788ce9c8dcbacf66655ce3bbee1fa66b94eb5c687b755fb2b6f5a5cc1"
	I1027 18:59:58.880111  275311 cri.go:89] found id: "7f055e0b328c77feb86cbf8ba19ed9f73334ca7b97c3bc8004db97b398fbfd75"
	I1027 18:59:58.880114  275311 cri.go:89] found id: "7e472e3b179a0264586ada3ba6103e35c58134a6e668cb3b2b6bf2e04e56940a"
	I1027 18:59:58.880121  275311 cri.go:89] found id: "a9d1dbc41feea92d008c9764ac8993cf482b1be7c976bf1221012c77bb4740da"
	I1027 18:59:58.880127  275311 cri.go:89] found id: "bfa31a6efbb78b65e0d3b92e9683ba4e99525bf68f9335010d133d41e467bee4"
	I1027 18:59:58.880136  275311 cri.go:89] found id: "458122abc2a23963c258beece13449337f47c2ccdc075406236a8a13a8063201"
	I1027 18:59:58.880162  275311 cri.go:89] found id: "2d95c80ef718cfc6804388ffc97159e55cf48ac7c6673231ecb8239d50e7d811"
	I1027 18:59:58.880166  275311 cri.go:89] found id: "a33d881a2daa0476651bd4a08fc46730205cfe03c3681178bb615a5e9635926e"
	I1027 18:59:58.880169  275311 cri.go:89] found id: "8ebcbe2c9975f2a8719a3839dcd24a15c292a81c5c76b171625fac6a82dd851d"
	I1027 18:59:58.880193  275311 cri.go:89] found id: "fe0ea9b1d2cf2cbefbf8c8c1c5b40f8d072efd2d489b534d519c969d9f5078d6"
	I1027 18:59:58.880197  275311 cri.go:89] found id: "f81e4711fbc01c871625a0d7da229dd580feb4c12934b3a38b50966b57a4259b"
	I1027 18:59:58.880201  275311 cri.go:89] found id: "28587c37519da029d28ad619eccddb0f9cf6be1a53a5a3ca6648894a187e90d2"
	I1027 18:59:58.880204  275311 cri.go:89] found id: "1e5034229198563b54bd8e5f173e0492eee9301465992e61e3c894cb36e53dd6"
	I1027 18:59:58.880207  275311 cri.go:89] found id: "d5efbcd6024e7af2a933432a33e2b4197973d93eab2d9c966cb6e488d871f23b"
	I1027 18:59:58.880212  275311 cri.go:89] found id: "4a399d9b30f5e68ec2fe6ac2e4b287b9821a6d963956dc63355919c90bbdbff6"
	I1027 18:59:58.880234  275311 cri.go:89] found id: "752e65ab367c93dbf18c03c14f825896a94d2b17a2a40668c50b5961b7e50972"
	I1027 18:59:58.880244  275311 cri.go:89] found id: "02d35f7174bb35a983c70121d66fa26a8b0d33acfa279db1bc1b3d623b4f8f09"
	I1027 18:59:58.880247  275311 cri.go:89] found id: ""
	I1027 18:59:58.880313  275311 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 18:59:58.895600  275311 out.go:203] 
	W1027 18:59:58.898509  275311 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T18:59:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T18:59:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 18:59:58.898551  275311 out.go:285] * 
	* 
	W1027 18:59:58.904533  275311 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 18:59:58.907516  275311 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-101592 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.06s)
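
Every MK_ADDON_DISABLE_PAUSED failure in this run follows the pattern visible in the stderr above: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers via crictl and then shelling out to "sudo runc list -f json"; that second command exits 1 on this crio node because /run/runc does not exist. A minimal sketch of the failing shell-out with a crictl-based fallback follows — the fallback is a hypothetical mitigation, not minikube's implementation:

	// paused_check.go: a sketch of the check that fails above. Only the two
	// commands quoted in the trace are assumed; the crictl fallback is a
	// hypothetical mitigation, not minikube's actual code.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func listContainers() ([]byte, error) {
		// The call that fails in the log: runc keeps its state in /run/runc,
		// which is absent when crio is configured with a different runtime root.
		if out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output(); err == nil {
			return out, nil
		}
		// Fall back to the CRI, mirroring the earlier successful
		// "crictl ps -a --quiet" call in the trace.
		return exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	}

	func main() {
		out, err := listContainers()
		if err != nil {
			fmt.Fprintln(os.Stderr, "paused check failed:", err)
			os.Exit(1)
		}
		fmt.Printf("%s\n", out)
	}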

TestAddons/parallel/CloudSpanner (6.31s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-zkplg" [7684733c-4c6b-4334-96b1-33001520085e] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003392545s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-101592 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-101592 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (295.59768ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1027 19:01:09.321459  277208 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:01:09.322205  277208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:01:09.322243  277208 out.go:374] Setting ErrFile to fd 2...
	I1027 19:01:09.322266  277208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:01:09.322560  277208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 19:01:09.322898  277208 mustload.go:65] Loading cluster: addons-101592
	I1027 19:01:09.323501  277208 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:01:09.323546  277208 addons.go:606] checking whether the cluster is paused
	I1027 19:01:09.323717  277208 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:01:09.323766  277208 host.go:66] Checking if "addons-101592" exists ...
	I1027 19:01:09.324388  277208 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 19:01:09.349329  277208 ssh_runner.go:195] Run: systemctl --version
	I1027 19:01:09.349383  277208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 19:01:09.368283  277208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 19:01:09.481652  277208 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:01:09.481729  277208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:01:09.527637  277208 cri.go:89] found id: "3e163786b302c708b17fe3ccf72adc340a75e221cd0896244768e1510b5aa88f"
	I1027 19:01:09.527661  277208 cri.go:89] found id: "838eb4978f205b327ad70c9fe726ab47413794195e774fac18ab9ae207723d6a"
	I1027 19:01:09.527666  277208 cri.go:89] found id: "d6943420da6a4a5c627e3e1859fcefbdc0f116ec559255515e3c76875ef303ec"
	I1027 19:01:09.527671  277208 cri.go:89] found id: "7e2b5aeafd0057d1e7d6c15f07b26a4833c3369e948af52f6ce2a927043ec7ce"
	I1027 19:01:09.527674  277208 cri.go:89] found id: "2aa2da6d8f06b0fc78c255affa19eec7d586298271ac54c0800166ccd1369669"
	I1027 19:01:09.527678  277208 cri.go:89] found id: "50e0b6b85b8a72a9fc162999003b05f4316b2f1d308c55874ee95912e0e1662e"
	I1027 19:01:09.527681  277208 cri.go:89] found id: "cd4e827788ce9c8dcbacf66655ce3bbee1fa66b94eb5c687b755fb2b6f5a5cc1"
	I1027 19:01:09.527684  277208 cri.go:89] found id: "7f055e0b328c77feb86cbf8ba19ed9f73334ca7b97c3bc8004db97b398fbfd75"
	I1027 19:01:09.527687  277208 cri.go:89] found id: "7e472e3b179a0264586ada3ba6103e35c58134a6e668cb3b2b6bf2e04e56940a"
	I1027 19:01:09.527692  277208 cri.go:89] found id: "a9d1dbc41feea92d008c9764ac8993cf482b1be7c976bf1221012c77bb4740da"
	I1027 19:01:09.527696  277208 cri.go:89] found id: "bfa31a6efbb78b65e0d3b92e9683ba4e99525bf68f9335010d133d41e467bee4"
	I1027 19:01:09.527699  277208 cri.go:89] found id: "458122abc2a23963c258beece13449337f47c2ccdc075406236a8a13a8063201"
	I1027 19:01:09.527702  277208 cri.go:89] found id: "2d95c80ef718cfc6804388ffc97159e55cf48ac7c6673231ecb8239d50e7d811"
	I1027 19:01:09.527705  277208 cri.go:89] found id: "a33d881a2daa0476651bd4a08fc46730205cfe03c3681178bb615a5e9635926e"
	I1027 19:01:09.527709  277208 cri.go:89] found id: "8ebcbe2c9975f2a8719a3839dcd24a15c292a81c5c76b171625fac6a82dd851d"
	I1027 19:01:09.527713  277208 cri.go:89] found id: "fe0ea9b1d2cf2cbefbf8c8c1c5b40f8d072efd2d489b534d519c969d9f5078d6"
	I1027 19:01:09.527721  277208 cri.go:89] found id: "f81e4711fbc01c871625a0d7da229dd580feb4c12934b3a38b50966b57a4259b"
	I1027 19:01:09.527727  277208 cri.go:89] found id: "28587c37519da029d28ad619eccddb0f9cf6be1a53a5a3ca6648894a187e90d2"
	I1027 19:01:09.527731  277208 cri.go:89] found id: "1e5034229198563b54bd8e5f173e0492eee9301465992e61e3c894cb36e53dd6"
	I1027 19:01:09.527734  277208 cri.go:89] found id: "d5efbcd6024e7af2a933432a33e2b4197973d93eab2d9c966cb6e488d871f23b"
	I1027 19:01:09.527739  277208 cri.go:89] found id: "4a399d9b30f5e68ec2fe6ac2e4b287b9821a6d963956dc63355919c90bbdbff6"
	I1027 19:01:09.527745  277208 cri.go:89] found id: "752e65ab367c93dbf18c03c14f825896a94d2b17a2a40668c50b5961b7e50972"
	I1027 19:01:09.527751  277208 cri.go:89] found id: "02d35f7174bb35a983c70121d66fa26a8b0d33acfa279db1bc1b3d623b4f8f09"
	I1027 19:01:09.527759  277208 cri.go:89] found id: ""
	I1027 19:01:09.527812  277208 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:01:09.542357  277208 out.go:203] 
	W1027 19:01:09.543713  277208 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:01:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:01:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 19:01:09.543737  277208 out.go:285] * 
	* 
	W1027 19:01:09.549800  277208 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 19:01:09.551378  277208 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-101592 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.31s)

TestAddons/parallel/LocalPath (8.47s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-101592 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-101592 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101592 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101592 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101592 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101592 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-101592 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [7cddf1da-1ab3-4150-b1aa-1cb1c7306711] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [7cddf1da-1ab3-4150-b1aa-1cb1c7306711] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [7cddf1da-1ab3-4150-b1aa-1cb1c7306711] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004580075s
addons_test.go:967: (dbg) Run:  kubectl --context addons-101592 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-101592 ssh "cat /opt/local-path-provisioner/pvc-c9f40c89-0f13-48bb-bf71-f70c4746ee6e_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-101592 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-101592 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-101592 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-101592 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (268.133905ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1027 19:01:03.043794  277104 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:01:03.044657  277104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:01:03.044672  277104 out.go:374] Setting ErrFile to fd 2...
	I1027 19:01:03.044678  277104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:01:03.044955  277104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 19:01:03.045322  277104 mustload.go:65] Loading cluster: addons-101592
	I1027 19:01:03.045701  277104 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:01:03.045722  277104 addons.go:606] checking whether the cluster is paused
	I1027 19:01:03.045825  277104 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:01:03.045843  277104 host.go:66] Checking if "addons-101592" exists ...
	I1027 19:01:03.046294  277104 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 19:01:03.064115  277104 ssh_runner.go:195] Run: systemctl --version
	I1027 19:01:03.064179  277104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 19:01:03.081842  277104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 19:01:03.185830  277104 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:01:03.185921  277104 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:01:03.216148  277104 cri.go:89] found id: "3e163786b302c708b17fe3ccf72adc340a75e221cd0896244768e1510b5aa88f"
	I1027 19:01:03.216213  277104 cri.go:89] found id: "838eb4978f205b327ad70c9fe726ab47413794195e774fac18ab9ae207723d6a"
	I1027 19:01:03.216231  277104 cri.go:89] found id: "d6943420da6a4a5c627e3e1859fcefbdc0f116ec559255515e3c76875ef303ec"
	I1027 19:01:03.216236  277104 cri.go:89] found id: "7e2b5aeafd0057d1e7d6c15f07b26a4833c3369e948af52f6ce2a927043ec7ce"
	I1027 19:01:03.216239  277104 cri.go:89] found id: "2aa2da6d8f06b0fc78c255affa19eec7d586298271ac54c0800166ccd1369669"
	I1027 19:01:03.216243  277104 cri.go:89] found id: "50e0b6b85b8a72a9fc162999003b05f4316b2f1d308c55874ee95912e0e1662e"
	I1027 19:01:03.216248  277104 cri.go:89] found id: "cd4e827788ce9c8dcbacf66655ce3bbee1fa66b94eb5c687b755fb2b6f5a5cc1"
	I1027 19:01:03.216252  277104 cri.go:89] found id: "7f055e0b328c77feb86cbf8ba19ed9f73334ca7b97c3bc8004db97b398fbfd75"
	I1027 19:01:03.216255  277104 cri.go:89] found id: "7e472e3b179a0264586ada3ba6103e35c58134a6e668cb3b2b6bf2e04e56940a"
	I1027 19:01:03.216271  277104 cri.go:89] found id: "a9d1dbc41feea92d008c9764ac8993cf482b1be7c976bf1221012c77bb4740da"
	I1027 19:01:03.216289  277104 cri.go:89] found id: "bfa31a6efbb78b65e0d3b92e9683ba4e99525bf68f9335010d133d41e467bee4"
	I1027 19:01:03.216293  277104 cri.go:89] found id: "458122abc2a23963c258beece13449337f47c2ccdc075406236a8a13a8063201"
	I1027 19:01:03.216296  277104 cri.go:89] found id: "2d95c80ef718cfc6804388ffc97159e55cf48ac7c6673231ecb8239d50e7d811"
	I1027 19:01:03.216299  277104 cri.go:89] found id: "a33d881a2daa0476651bd4a08fc46730205cfe03c3681178bb615a5e9635926e"
	I1027 19:01:03.216303  277104 cri.go:89] found id: "8ebcbe2c9975f2a8719a3839dcd24a15c292a81c5c76b171625fac6a82dd851d"
	I1027 19:01:03.216312  277104 cri.go:89] found id: "fe0ea9b1d2cf2cbefbf8c8c1c5b40f8d072efd2d489b534d519c969d9f5078d6"
	I1027 19:01:03.216324  277104 cri.go:89] found id: "f81e4711fbc01c871625a0d7da229dd580feb4c12934b3a38b50966b57a4259b"
	I1027 19:01:03.216329  277104 cri.go:89] found id: "28587c37519da029d28ad619eccddb0f9cf6be1a53a5a3ca6648894a187e90d2"
	I1027 19:01:03.216332  277104 cri.go:89] found id: "1e5034229198563b54bd8e5f173e0492eee9301465992e61e3c894cb36e53dd6"
	I1027 19:01:03.216335  277104 cri.go:89] found id: "d5efbcd6024e7af2a933432a33e2b4197973d93eab2d9c966cb6e488d871f23b"
	I1027 19:01:03.216340  277104 cri.go:89] found id: "4a399d9b30f5e68ec2fe6ac2e4b287b9821a6d963956dc63355919c90bbdbff6"
	I1027 19:01:03.216343  277104 cri.go:89] found id: "752e65ab367c93dbf18c03c14f825896a94d2b17a2a40668c50b5961b7e50972"
	I1027 19:01:03.216346  277104 cri.go:89] found id: "02d35f7174bb35a983c70121d66fa26a8b0d33acfa279db1bc1b3d623b4f8f09"
	I1027 19:01:03.216350  277104 cri.go:89] found id: ""
	I1027 19:01:03.216401  277104 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:01:03.235885  277104 out.go:203] 
	W1027 19:01:03.237621  277104 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:01:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:01:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 19:01:03.237647  277104 out.go:285] * 
	* 
	W1027 19:01:03.243645  277104 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 19:01:03.245544  277104 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-101592 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.47s)

TestAddons/parallel/NvidiaDevicePlugin (5.27s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-sghjb" [4b33da98-82a0-4a8e-aa0c-c6cf4878e558] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00333324s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-101592 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-101592 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (265.920376ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1027 19:00:48.297924  276733 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:00:48.298961  276733 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:00:48.299036  276733 out.go:374] Setting ErrFile to fd 2...
	I1027 19:00:48.299058  276733 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:00:48.299332  276733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 19:00:48.299662  276733 mustload.go:65] Loading cluster: addons-101592
	I1027 19:00:48.300125  276733 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:00:48.300162  276733 addons.go:606] checking whether the cluster is paused
	I1027 19:00:48.300287  276733 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:00:48.300326  276733 host.go:66] Checking if "addons-101592" exists ...
	I1027 19:00:48.300793  276733 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 19:00:48.319045  276733 ssh_runner.go:195] Run: systemctl --version
	I1027 19:00:48.319122  276733 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 19:00:48.339531  276733 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 19:00:48.445363  276733 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:00:48.445458  276733 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:00:48.474768  276733 cri.go:89] found id: "3e163786b302c708b17fe3ccf72adc340a75e221cd0896244768e1510b5aa88f"
	I1027 19:00:48.474798  276733 cri.go:89] found id: "838eb4978f205b327ad70c9fe726ab47413794195e774fac18ab9ae207723d6a"
	I1027 19:00:48.474805  276733 cri.go:89] found id: "d6943420da6a4a5c627e3e1859fcefbdc0f116ec559255515e3c76875ef303ec"
	I1027 19:00:48.474808  276733 cri.go:89] found id: "7e2b5aeafd0057d1e7d6c15f07b26a4833c3369e948af52f6ce2a927043ec7ce"
	I1027 19:00:48.474812  276733 cri.go:89] found id: "2aa2da6d8f06b0fc78c255affa19eec7d586298271ac54c0800166ccd1369669"
	I1027 19:00:48.474815  276733 cri.go:89] found id: "50e0b6b85b8a72a9fc162999003b05f4316b2f1d308c55874ee95912e0e1662e"
	I1027 19:00:48.474819  276733 cri.go:89] found id: "cd4e827788ce9c8dcbacf66655ce3bbee1fa66b94eb5c687b755fb2b6f5a5cc1"
	I1027 19:00:48.474822  276733 cri.go:89] found id: "7f055e0b328c77feb86cbf8ba19ed9f73334ca7b97c3bc8004db97b398fbfd75"
	I1027 19:00:48.474826  276733 cri.go:89] found id: "7e472e3b179a0264586ada3ba6103e35c58134a6e668cb3b2b6bf2e04e56940a"
	I1027 19:00:48.474832  276733 cri.go:89] found id: "a9d1dbc41feea92d008c9764ac8993cf482b1be7c976bf1221012c77bb4740da"
	I1027 19:00:48.474835  276733 cri.go:89] found id: "bfa31a6efbb78b65e0d3b92e9683ba4e99525bf68f9335010d133d41e467bee4"
	I1027 19:00:48.474840  276733 cri.go:89] found id: "458122abc2a23963c258beece13449337f47c2ccdc075406236a8a13a8063201"
	I1027 19:00:48.474848  276733 cri.go:89] found id: "2d95c80ef718cfc6804388ffc97159e55cf48ac7c6673231ecb8239d50e7d811"
	I1027 19:00:48.474852  276733 cri.go:89] found id: "a33d881a2daa0476651bd4a08fc46730205cfe03c3681178bb615a5e9635926e"
	I1027 19:00:48.474855  276733 cri.go:89] found id: "8ebcbe2c9975f2a8719a3839dcd24a15c292a81c5c76b171625fac6a82dd851d"
	I1027 19:00:48.474860  276733 cri.go:89] found id: "fe0ea9b1d2cf2cbefbf8c8c1c5b40f8d072efd2d489b534d519c969d9f5078d6"
	I1027 19:00:48.474866  276733 cri.go:89] found id: "f81e4711fbc01c871625a0d7da229dd580feb4c12934b3a38b50966b57a4259b"
	I1027 19:00:48.474871  276733 cri.go:89] found id: "28587c37519da029d28ad619eccddb0f9cf6be1a53a5a3ca6648894a187e90d2"
	I1027 19:00:48.474874  276733 cri.go:89] found id: "1e5034229198563b54bd8e5f173e0492eee9301465992e61e3c894cb36e53dd6"
	I1027 19:00:48.474877  276733 cri.go:89] found id: "d5efbcd6024e7af2a933432a33e2b4197973d93eab2d9c966cb6e488d871f23b"
	I1027 19:00:48.474881  276733 cri.go:89] found id: "4a399d9b30f5e68ec2fe6ac2e4b287b9821a6d963956dc63355919c90bbdbff6"
	I1027 19:00:48.474884  276733 cri.go:89] found id: "752e65ab367c93dbf18c03c14f825896a94d2b17a2a40668c50b5961b7e50972"
	I1027 19:00:48.474888  276733 cri.go:89] found id: "02d35f7174bb35a983c70121d66fa26a8b0d33acfa279db1bc1b3d623b4f8f09"
	I1027 19:00:48.474891  276733 cri.go:89] found id: ""
	I1027 19:00:48.474951  276733 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:00:48.491173  276733 out.go:203] 
	W1027 19:00:48.494149  276733 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:00:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:00:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 19:00:48.494182  276733 out.go:285] * 
	* 
	W1027 19:00:48.500256  276733 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 19:00:48.503465  276733 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-101592 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.27s)
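
The NvidiaDevicePlugin result repeats the shape of Headlamp, CloudSpanner, and LocalPath above: the addon's pods become healthy within the wait window, and only the trailing "addons disable" call fails with exit status 11 and the same "open /run/runc: no such file or directory" error. Taken together, these failures implicate the paused-state check on this crio/arm64 image rather than the addons under test.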

TestAddons/parallel/Yakd (6.27s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-lhv5m" [da9667e7-3b25-46aa-a743-5b4e3f18b8fe] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004039765s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-101592 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-101592 addons disable yakd --alsologtostderr -v=1: exit status 11 (266.345857ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1027 19:00:54.558575  276807 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:00:54.559369  276807 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:00:54.559385  276807 out.go:374] Setting ErrFile to fd 2...
	I1027 19:00:54.559391  276807 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:00:54.559648  276807 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 19:00:54.559928  276807 mustload.go:65] Loading cluster: addons-101592
	I1027 19:00:54.560280  276807 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:00:54.560297  276807 addons.go:606] checking whether the cluster is paused
	I1027 19:00:54.560401  276807 config.go:182] Loaded profile config "addons-101592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:00:54.560415  276807 host.go:66] Checking if "addons-101592" exists ...
	I1027 19:00:54.560891  276807 cli_runner.go:164] Run: docker container inspect addons-101592 --format={{.State.Status}}
	I1027 19:00:54.578950  276807 ssh_runner.go:195] Run: systemctl --version
	I1027 19:00:54.579050  276807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-101592
	I1027 19:00:54.598731  276807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/addons-101592/id_rsa Username:docker}
	I1027 19:00:54.705596  276807 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:00:54.705702  276807 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:00:54.739768  276807 cri.go:89] found id: "3e163786b302c708b17fe3ccf72adc340a75e221cd0896244768e1510b5aa88f"
	I1027 19:00:54.739801  276807 cri.go:89] found id: "838eb4978f205b327ad70c9fe726ab47413794195e774fac18ab9ae207723d6a"
	I1027 19:00:54.739806  276807 cri.go:89] found id: "d6943420da6a4a5c627e3e1859fcefbdc0f116ec559255515e3c76875ef303ec"
	I1027 19:00:54.739811  276807 cri.go:89] found id: "7e2b5aeafd0057d1e7d6c15f07b26a4833c3369e948af52f6ce2a927043ec7ce"
	I1027 19:00:54.739814  276807 cri.go:89] found id: "2aa2da6d8f06b0fc78c255affa19eec7d586298271ac54c0800166ccd1369669"
	I1027 19:00:54.739818  276807 cri.go:89] found id: "50e0b6b85b8a72a9fc162999003b05f4316b2f1d308c55874ee95912e0e1662e"
	I1027 19:00:54.739821  276807 cri.go:89] found id: "cd4e827788ce9c8dcbacf66655ce3bbee1fa66b94eb5c687b755fb2b6f5a5cc1"
	I1027 19:00:54.739824  276807 cri.go:89] found id: "7f055e0b328c77feb86cbf8ba19ed9f73334ca7b97c3bc8004db97b398fbfd75"
	I1027 19:00:54.739827  276807 cri.go:89] found id: "7e472e3b179a0264586ada3ba6103e35c58134a6e668cb3b2b6bf2e04e56940a"
	I1027 19:00:54.739833  276807 cri.go:89] found id: "a9d1dbc41feea92d008c9764ac8993cf482b1be7c976bf1221012c77bb4740da"
	I1027 19:00:54.739837  276807 cri.go:89] found id: "bfa31a6efbb78b65e0d3b92e9683ba4e99525bf68f9335010d133d41e467bee4"
	I1027 19:00:54.739840  276807 cri.go:89] found id: "458122abc2a23963c258beece13449337f47c2ccdc075406236a8a13a8063201"
	I1027 19:00:54.739842  276807 cri.go:89] found id: "2d95c80ef718cfc6804388ffc97159e55cf48ac7c6673231ecb8239d50e7d811"
	I1027 19:00:54.739845  276807 cri.go:89] found id: "a33d881a2daa0476651bd4a08fc46730205cfe03c3681178bb615a5e9635926e"
	I1027 19:00:54.739848  276807 cri.go:89] found id: "8ebcbe2c9975f2a8719a3839dcd24a15c292a81c5c76b171625fac6a82dd851d"
	I1027 19:00:54.739854  276807 cri.go:89] found id: "fe0ea9b1d2cf2cbefbf8c8c1c5b40f8d072efd2d489b534d519c969d9f5078d6"
	I1027 19:00:54.739857  276807 cri.go:89] found id: "f81e4711fbc01c871625a0d7da229dd580feb4c12934b3a38b50966b57a4259b"
	I1027 19:00:54.739861  276807 cri.go:89] found id: "28587c37519da029d28ad619eccddb0f9cf6be1a53a5a3ca6648894a187e90d2"
	I1027 19:00:54.739864  276807 cri.go:89] found id: "1e5034229198563b54bd8e5f173e0492eee9301465992e61e3c894cb36e53dd6"
	I1027 19:00:54.739867  276807 cri.go:89] found id: "d5efbcd6024e7af2a933432a33e2b4197973d93eab2d9c966cb6e488d871f23b"
	I1027 19:00:54.739872  276807 cri.go:89] found id: "4a399d9b30f5e68ec2fe6ac2e4b287b9821a6d963956dc63355919c90bbdbff6"
	I1027 19:00:54.739882  276807 cri.go:89] found id: "752e65ab367c93dbf18c03c14f825896a94d2b17a2a40668c50b5961b7e50972"
	I1027 19:00:54.739886  276807 cri.go:89] found id: "02d35f7174bb35a983c70121d66fa26a8b0d33acfa279db1bc1b3d623b4f8f09"
	I1027 19:00:54.739889  276807 cri.go:89] found id: ""
	I1027 19:00:54.739940  276807 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:00:54.759304  276807 out.go:203] 
	W1027 19:00:54.765138  276807 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:00:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 19:00:54.765165  276807 out.go:285] * 
	W1027 19:00:54.771249  276807 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 19:00:54.776241  276807 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-101592 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.27s)
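
Note: the addon-disable failures in this group (Yakd here, NvidiaDevicePlugin above, and the other parallel addon tests in the summary) all fail the same way. Before disabling an addon, minikube verifies the cluster is not paused; that check shells into the node and runs `sudo runc list -f json`, which errors out because /run/runc does not exist on this kicbase/CRI-O image. The probe can be reproduced by hand with the same commands that appear in the stderr trace (a debugging sketch, not part of the test):

    minikube -p addons-101592 ssh -- sudo runc list -f json
    # fails: level=error msg="open /run/runc: no such file or directory"
    minikube -p addons-101592 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # succeeds: the CRI-level listing the same code path runs first (cri.go:54 above)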

TestFunctional/parallel/ServiceCmdConnect (603.47s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-647336 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-647336 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-75nls" [183cbeda-1c17-4b7c-b154-aa2441fb4ace] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1027 19:07:28.957739  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:09:45.062616  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:10:12.799173  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:14:45.063146  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-647336 -n functional-647336
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-27 19:17:27.348306134 +0000 UTC m=+1249.498718400
functional_test.go:1645: (dbg) Run:  kubectl --context functional-647336 describe po hello-node-connect-7d85dfc575-75nls -n default
functional_test.go:1645: (dbg) kubectl --context functional-647336 describe po hello-node-connect-7d85dfc575-75nls -n default:
Name:             hello-node-connect-7d85dfc575-75nls
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-647336/192.168.49.2
Start Time:       Mon, 27 Oct 2025 19:07:26 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-blqtn (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-blqtn:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-75nls to functional-647336
Normal   Pulling    7m12s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m12s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m12s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m51s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m37s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-647336 logs hello-node-connect-7d85dfc575-75nls -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-647336 logs hello-node-connect-7d85dfc575-75nls -n default: exit status 1 (84.906071ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-75nls" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-647336 logs hello-node-connect-7d85dfc575-75nls -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
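
The ten-minute failure above is the harness's readiness poll expiring. Outside the harness, the equivalent check is a one-liner (a sketch using the stock kubectl wait subcommand; it would time out here for the same reason):

    kubectl --context functional-647336 wait pod -l app=hello-node-connect \
      --for=condition=Ready --timeout=10m
    # never becomes Ready: the image pull keeps failing (see the events below)
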
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-647336 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-75nls
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-647336/192.168.49.2
Start Time:       Mon, 27 Oct 2025 19:07:26 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-blqtn (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-blqtn:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-75nls to functional-647336
Normal   Pulling    7m12s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m12s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m12s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m51s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m37s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

functional_test.go:1618: (dbg) Run:  kubectl --context functional-647336 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-647336 logs -l app=hello-node-connect: exit status 1 (89.908765ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-75nls" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-647336 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-647336 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.109.20.196
IPs:                      10.109.20.196
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32096/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
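
The events in both describe dumps identify the actual failure: CRI-O on this node enforces short-name resolution, so the unqualified reference kicbase/echo-server is rejected as ambiguous rather than resolved against a default registry. Two possible workarounds, sketched under the assumption of a stock containers-registries layout (the drop-in path below is the conventional default, not taken from this run):

    # 1. Fully qualify the image so no short-name resolution is needed:
    kubectl --context functional-647336 create deployment hello-node-connect \
      --image docker.io/kicbase/echo-server:latest

    # 2. Or pin the short name on the node, e.g. in
    #    /etc/containers/registries.conf.d/echo-server.conf:
    [aliases]
    "kicbase/echo-server" = "docker.io/kicbase/echo-server"
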
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-647336
helpers_test.go:243: (dbg) docker inspect functional-647336:

-- stdout --
	[
	    {
	        "Id": "9708eb6e1b266c76a526ae8131106ad5056113fb1d6a5005f0aa4ca8ce31979d",
	        "Created": "2025-10-27T19:04:06.393045668Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 283573,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T19:04:06.454862361Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/9708eb6e1b266c76a526ae8131106ad5056113fb1d6a5005f0aa4ca8ce31979d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9708eb6e1b266c76a526ae8131106ad5056113fb1d6a5005f0aa4ca8ce31979d/hostname",
	        "HostsPath": "/var/lib/docker/containers/9708eb6e1b266c76a526ae8131106ad5056113fb1d6a5005f0aa4ca8ce31979d/hosts",
	        "LogPath": "/var/lib/docker/containers/9708eb6e1b266c76a526ae8131106ad5056113fb1d6a5005f0aa4ca8ce31979d/9708eb6e1b266c76a526ae8131106ad5056113fb1d6a5005f0aa4ca8ce31979d-json.log",
	        "Name": "/functional-647336",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-647336:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-647336",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9708eb6e1b266c76a526ae8131106ad5056113fb1d6a5005f0aa4ca8ce31979d",
	                "LowerDir": "/var/lib/docker/overlay2/68b0575c811ea22682da169c1989757dccd2ecbfa1cde9230caa6facb965c213-init/diff:/var/lib/docker/overlay2/0567218bbe05e8b59b2aef7ad82032d6716040c1f9d9d73e7de8682101079f76/diff",
	                "MergedDir": "/var/lib/docker/overlay2/68b0575c811ea22682da169c1989757dccd2ecbfa1cde9230caa6facb965c213/merged",
	                "UpperDir": "/var/lib/docker/overlay2/68b0575c811ea22682da169c1989757dccd2ecbfa1cde9230caa6facb965c213/diff",
	                "WorkDir": "/var/lib/docker/overlay2/68b0575c811ea22682da169c1989757dccd2ecbfa1cde9230caa6facb965c213/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-647336",
	                "Source": "/var/lib/docker/volumes/functional-647336/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-647336",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-647336",
	                "name.minikube.sigs.k8s.io": "functional-647336",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "417ab1b1b020ecbae1a112654dfde04ce39fd6767f52a6d85733f144e7d18858",
	            "SandboxKey": "/var/run/docker/netns/417ab1b1b020",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-647336": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:54:f4:73:91:84",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fdcf0f26ba0484693dde1b00a7f9c91724134824dc04af9a3fa369b7536519a3",
	                    "EndpointID": "e51b2ad47567f2b052e088c7935494f06fe59b9381b0df9f549c923a14d3422b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-647336",
	                        "9708eb6e1b26"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
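
One field worth pulling out of the inspect dump: NetworkSettings.Ports is how the tooling locates its SSH tunnel into the node. The Go template below is the same one visible in the cli_runner traces earlier in this report, applied to this profile:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-647336
    # prints 33138 for this run, matching the 22/tcp mapping above
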
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-647336 -n functional-647336
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-647336 logs -n 25: (1.469271108s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-647336 ssh findmnt -T /mount-9p | grep 9p                                                               │ functional-647336 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ ssh            │ functional-647336 ssh -- ls -la /mount-9p                                                                          │ functional-647336 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ ssh            │ functional-647336 ssh sudo umount -f /mount-9p                                                                     │ functional-647336 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │                     │
	│ mount          │ -p functional-647336 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1549481524/001:/mount1 --alsologtostderr -v=1 │ functional-647336 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │                     │
	│ ssh            │ functional-647336 ssh findmnt -T /mount1                                                                           │ functional-647336 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │                     │
	│ mount          │ -p functional-647336 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1549481524/001:/mount2 --alsologtostderr -v=1 │ functional-647336 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │                     │
	│ mount          │ -p functional-647336 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1549481524/001:/mount3 --alsologtostderr -v=1 │ functional-647336 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │                     │
	│ ssh            │ functional-647336 ssh findmnt -T /mount1                                                                           │ functional-647336 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ ssh            │ functional-647336 ssh findmnt -T /mount2                                                                           │ functional-647336 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ ssh            │ functional-647336 ssh findmnt -T /mount3                                                                           │ functional-647336 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ mount          │ -p functional-647336 --kill=true                                                                                   │ functional-647336 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │                     │
	│ start          │ -p functional-647336 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-647336 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │                     │
	│ start          │ -p functional-647336 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ functional-647336 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │                     │
	│ start          │ -p functional-647336 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-647336 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-647336 --alsologtostderr -v=1                                                     │ functional-647336 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ update-context │ functional-647336 update-context --alsologtostderr -v=2                                                            │ functional-647336 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ update-context │ functional-647336 update-context --alsologtostderr -v=2                                                            │ functional-647336 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ update-context │ functional-647336 update-context --alsologtostderr -v=2                                                            │ functional-647336 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ image          │ functional-647336 image ls --format short --alsologtostderr                                                        │ functional-647336 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ image          │ functional-647336 image ls --format yaml --alsologtostderr                                                         │ functional-647336 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ ssh            │ functional-647336 ssh pgrep buildkitd                                                                              │ functional-647336 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │                     │
	│ image          │ functional-647336 image build -t localhost/my-image:functional-647336 testdata/build --alsologtostderr             │ functional-647336 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ image          │ functional-647336 image ls                                                                                         │ functional-647336 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ image          │ functional-647336 image ls --format json --alsologtostderr                                                         │ functional-647336 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	│ image          │ functional-647336 image ls --format table --alsologtostderr                                                        │ functional-647336 │ jenkins │ v1.37.0 │ 27 Oct 25 19:17 UTC │ 27 Oct 25 19:17 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 19:17:05
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 19:17:05.990636  295276 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:17:05.990881  295276 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:17:05.990911  295276 out.go:374] Setting ErrFile to fd 2...
	I1027 19:17:05.990930  295276 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:17:05.991969  295276 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 19:17:05.992528  295276 out.go:368] Setting JSON to false
	I1027 19:17:05.993597  295276 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":7178,"bootTime":1761585448,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1027 19:17:05.993704  295276 start.go:141] virtualization:  
	I1027 19:17:05.997008  295276 out.go:179] * [functional-647336] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 19:17:06.000937  295276 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:17:06.001118  295276 notify.go:220] Checking for updates...
	I1027 19:17:06.008468  295276 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:17:06.011457  295276 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 19:17:06.014482  295276 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube
	I1027 19:17:06.018386  295276 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 19:17:06.022470  295276 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:17:06.026193  295276 config.go:182] Loaded profile config "functional-647336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:17:06.026886  295276 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:17:06.063346  295276 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 19:17:06.063525  295276 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:17:06.131252  295276 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 19:17:06.120902628 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 19:17:06.131376  295276 docker.go:318] overlay module found
	I1027 19:17:06.134543  295276 out.go:179] * Using the docker driver based on existing profile
	I1027 19:17:06.137481  295276 start.go:305] selected driver: docker
	I1027 19:17:06.137506  295276 start.go:925] validating driver "docker" against &{Name:functional-647336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-647336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:17:06.137613  295276 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:17:06.141330  295276 out.go:203] 
	W1027 19:17:06.144369  295276 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1027 19:17:06.147094  295276 out.go:203] 
	
	
	==> CRI-O <==
	Oct 27 19:17:12 functional-647336 crio[3516]: time="2025-10-27T19:17:12.12860619Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=6d357241-12c8-402b-b818-8516ccfe6ba6 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:17:12 functional-647336 crio[3516]: time="2025-10-27T19:17:12.130575322Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=59697c7a-d877-4b3a-9866-b5fe04cd9142 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:17:12 functional-647336 crio[3516]: time="2025-10-27T19:17:12.132997463Z" level=info msg="Pulling image: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=3eb24e6f-e777-4e86-a10d-bf9e521ecc11 name=/runtime.v1.ImageService/PullImage
	Oct 27 19:17:12 functional-647336 crio[3516]: time="2025-10-27T19:17:12.13552899Z" level=info msg="Trying to access \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Oct 27 19:17:12 functional-647336 crio[3516]: time="2025-10-27T19:17:12.13886111Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5fz9q/kubernetes-dashboard" id=c9060cf3-a741-413f-90c6-8995226ca726 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:17:12 functional-647336 crio[3516]: time="2025-10-27T19:17:12.139148429Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:17:12 functional-647336 crio[3516]: time="2025-10-27T19:17:12.144377683Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:17:12 functional-647336 crio[3516]: time="2025-10-27T19:17:12.144595711Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/13c65b86c035b2dcfdcca27155aff2c449df45b9fcdafcc3dc99f836a6ad48ae/merged/etc/group: no such file or directory"
	Oct 27 19:17:12 functional-647336 crio[3516]: time="2025-10-27T19:17:12.144968213Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:17:12 functional-647336 crio[3516]: time="2025-10-27T19:17:12.172609814Z" level=info msg="Created container 1be10bf92181ebd6f981f477b9d7b9d88a9840ba8df93e69ef68e5dbf42bccad: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5fz9q/kubernetes-dashboard" id=c9060cf3-a741-413f-90c6-8995226ca726 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:17:12 functional-647336 crio[3516]: time="2025-10-27T19:17:12.173516348Z" level=info msg="Starting container: 1be10bf92181ebd6f981f477b9d7b9d88a9840ba8df93e69ef68e5dbf42bccad" id=8ade818f-3a27-472a-89d7-5d2b176f4486 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:17:12 functional-647336 crio[3516]: time="2025-10-27T19:17:12.175489681Z" level=info msg="Started container" PID=6752 containerID=1be10bf92181ebd6f981f477b9d7b9d88a9840ba8df93e69ef68e5dbf42bccad description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5fz9q/kubernetes-dashboard id=8ade818f-3a27-472a-89d7-5d2b176f4486 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8e6ece54a4850b5f4e8c6e6a2a1fcba353697f204f9430c2068fcb42c1b1f01a
	Oct 27 19:17:12 functional-647336 crio[3516]: time="2025-10-27T19:17:12.426457539Z" level=info msg="Image operating system mismatch: image uses OS \"linux\"+architecture \"amd64\"+\"\", expecting one of \"linux+arm64+\\\"v8\\\", linux+arm64+\\\"\\\"\""
	Oct 27 19:17:13 functional-647336 crio[3516]: time="2025-10-27T19:17:13.384422004Z" level=info msg="Pulled image: docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a" id=3eb24e6f-e777-4e86-a10d-bf9e521ecc11 name=/runtime.v1.ImageService/PullImage
	Oct 27 19:17:13 functional-647336 crio[3516]: time="2025-10-27T19:17:13.385328997Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=c293c2d8-cb1e-494c-bbb1-f7a2beb11ffa name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:17:13 functional-647336 crio[3516]: time="2025-10-27T19:17:13.387454884Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=ce6efc02-d0a4-4e06-af03-ee2b67f41676 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:17:13 functional-647336 crio[3516]: time="2025-10-27T19:17:13.394721337Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-bhq9d/dashboard-metrics-scraper" id=f8eb3d88-c6b3-41d1-a232-a35f42d9bb79 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:17:13 functional-647336 crio[3516]: time="2025-10-27T19:17:13.394832973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:17:13 functional-647336 crio[3516]: time="2025-10-27T19:17:13.400195417Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:17:13 functional-647336 crio[3516]: time="2025-10-27T19:17:13.400530439Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/6b8dfc0909ff0b494a0a7cc9da687e97ce9b7b761a2c70318cc3e0dac86abacb/merged/etc/group: no such file or directory"
	Oct 27 19:17:13 functional-647336 crio[3516]: time="2025-10-27T19:17:13.400956896Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:17:13 functional-647336 crio[3516]: time="2025-10-27T19:17:13.41617752Z" level=info msg="Created container a4fedb067ac5096a5e570aab49ab05dffd676a9349ba0984ac70477e8b051fb2: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-bhq9d/dashboard-metrics-scraper" id=f8eb3d88-c6b3-41d1-a232-a35f42d9bb79 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:17:13 functional-647336 crio[3516]: time="2025-10-27T19:17:13.418622134Z" level=info msg="Starting container: a4fedb067ac5096a5e570aab49ab05dffd676a9349ba0984ac70477e8b051fb2" id=19c9eba7-0a90-4c0d-94a8-fe7eb55b649e name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:17:13 functional-647336 crio[3516]: time="2025-10-27T19:17:13.42168492Z" level=info msg="Started container" PID=6793 containerID=a4fedb067ac5096a5e570aab49ab05dffd676a9349ba0984ac70477e8b051fb2 description=kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-bhq9d/dashboard-metrics-scraper id=19c9eba7-0a90-4c0d-94a8-fe7eb55b649e name=/runtime.v1.RuntimeService/StartContainer sandboxID=bb23cb0d062c0662e2ee8b66ec99baf53612ec7d8514f02925e11742f0eb3f17
	Oct 27 19:17:17 functional-647336 crio[3516]: time="2025-10-27T19:17:17.658570821Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=416fce4f-93c6-4fe8-a09a-ff4586df8ad5 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a4fedb067ac50       docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a   15 seconds ago      Running             dashboard-metrics-scraper   0                   bb23cb0d062c0       dashboard-metrics-scraper-77bf4d6c4c-bhq9d   kubernetes-dashboard
	1be10bf92181e       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf         16 seconds ago      Running             kubernetes-dashboard        0                   8e6ece54a4850       kubernetes-dashboard-855c9754f9-5fz9q        kubernetes-dashboard
	3241d55f30d37       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e              30 seconds ago      Exited              mount-munger                0                   ec3119611523b       busybox-mount                                default
	8a9733b502895       docker.io/library/nginx@sha256:68e62e210589c349f01d82308b45fbd6fb9b855f8b12cb27e11ad48dbfd0e43f                  10 minutes ago      Running             myfrontend                  0                   b0b057dcb8aa4       sp-pod                                       default
	f0a76e21d67a7       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0                  10 minutes ago      Running             nginx                       0                   3b9041aadac5a       nginx-svc                                    default
	ea240287195be       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                 11 minutes ago      Running             kindnet-cni                 2                   1504356da5b39       kindnet-x6f8k                                kube-system
	fefcbf4c9e6aa       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                 11 minutes ago      Running             storage-provisioner         3                   2128db8caeed7       storage-provisioner                          kube-system
	379b8fbd0bb7a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                 11 minutes ago      Running             kube-proxy                  2                   d17bf18b3dd89       kube-proxy-wwpjs                             kube-system
	a52c78aae0058       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                 11 minutes ago      Running             coredns                     2                   33d941b543e93       coredns-66bc5c9577-ql5z4                     kube-system
	0ff2d0054728f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                 11 minutes ago      Running             kube-apiserver              0                   c6cbaf04105e9       kube-apiserver-functional-647336             kube-system
	4421a43a7a4ec       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                 11 minutes ago      Running             kube-scheduler              2                   8a8cb48a522c2       kube-scheduler-functional-647336             kube-system
	88c93ecd26aac       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                 11 minutes ago      Running             kube-controller-manager     2                   121a8b857def6       kube-controller-manager-functional-647336    kube-system
	181fecf7fe386       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                 11 minutes ago      Running             etcd                        2                   225c46ef815b1       etcd-functional-647336                       kube-system
	70e036cf2b981       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                 11 minutes ago      Exited              storage-provisioner         2                   2128db8caeed7       storage-provisioner                          kube-system
	97cf3b5f20ba0       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                 11 minutes ago      Exited              kindnet-cni                 1                   1504356da5b39       kindnet-x6f8k                                kube-system
	87b4dbdd24a22       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                 12 minutes ago      Exited              kube-controller-manager     1                   121a8b857def6       kube-controller-manager-functional-647336    kube-system
	fa7d5afec83bc       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                 12 minutes ago      Exited              etcd                        1                   225c46ef815b1       etcd-functional-647336                       kube-system
	55bf995090f82       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                 12 minutes ago      Exited              kube-scheduler              1                   8a8cb48a522c2       kube-scheduler-functional-647336             kube-system
	84f4617f9f819       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                 12 minutes ago      Exited              coredns                     1                   33d941b543e93       coredns-66bc5c9577-ql5z4                     kube-system
	b8ddcf50556db       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                 12 minutes ago      Exited              kube-proxy                  1                   d17bf18b3dd89       kube-proxy-wwpjs                             kube-system
	
	
	==> coredns [84f4617f9f819e93db8c3ee20fd4140159239b39b2632b5796027685fcee474a] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40185 - 39883 "HINFO IN 7131508385218308360.5381844559897655318. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.005936513s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a52c78aae0058916c763608357df1d3367643675f351775b75c928fe203b580e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50710 - 49597 "HINFO IN 1694987079841356311.8419830342536584023. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017094289s
	
	
	==> describe nodes <==
	Name:               functional-647336
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-647336
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=functional-647336
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T19_04_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 19:04:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-647336
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:17:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 19:17:21 +0000   Mon, 27 Oct 2025 19:04:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 19:17:21 +0000   Mon, 27 Oct 2025 19:04:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 19:17:21 +0000   Mon, 27 Oct 2025 19:04:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 19:17:21 +0000   Mon, 27 Oct 2025 19:05:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-647336
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                43275866-adfc-404a-b141-79e1498d056b
	  Boot ID:                    23ea05b4-8203-4fb9-a84a-5deb4b091cbb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-7wbx7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-75nls           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-ql5z4                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-647336                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-x6f8k                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-647336              250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-functional-647336     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-wwpjs                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-647336              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-bhq9d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-5fz9q         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 13m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node functional-647336 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node functional-647336 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x8 over 13m)  kubelet          Node functional-647336 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-647336 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-647336 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-647336 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-647336 event: Registered Node functional-647336 in Controller
	  Normal   NodeReady                12m                kubelet          Node functional-647336 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-647336 event: Registered Node functional-647336 in Controller
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-647336 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-647336 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-647336 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11m                node-controller  Node functional-647336 event: Registered Node functional-647336 in Controller
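	
	(For reference, the percentages under "Allocated resources" above follow directly from the pod table: CPU requests sum to 100m + 100m + 100m + 250m + 200m + 100m = 850m, and 850m of the node's 2 CPUs (2000m) is 42.5%, which kubectl truncates to 42%. Memory requests are 70Mi + 100Mi + 50Mi = 220Mi against 8022304Ki allocatable, i.e. roughly 2%.)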
	
	
	==> dmesg <==
	[Oct27 18:29] overlayfs: idmapped layers are currently not supported
	[Oct27 18:30] overlayfs: idmapped layers are currently not supported
	[ +18.215952] overlayfs: idmapped layers are currently not supported
	[Oct27 18:31] overlayfs: idmapped layers are currently not supported
	[ +35.797174] overlayfs: idmapped layers are currently not supported
	[Oct27 18:32] overlayfs: idmapped layers are currently not supported
	[Oct27 18:34] overlayfs: idmapped layers are currently not supported
	[ +38.178588] overlayfs: idmapped layers are currently not supported
	[Oct27 18:36] overlayfs: idmapped layers are currently not supported
	[ +29.649930] overlayfs: idmapped layers are currently not supported
	[Oct27 18:37] overlayfs: idmapped layers are currently not supported
	[Oct27 18:38] overlayfs: idmapped layers are currently not supported
	[ +26.025304] overlayfs: idmapped layers are currently not supported
	[Oct27 18:39] overlayfs: idmapped layers are currently not supported
	[  +8.720024] overlayfs: idmapped layers are currently not supported
	[Oct27 18:40] overlayfs: idmapped layers are currently not supported
	[Oct27 18:41] overlayfs: idmapped layers are currently not supported
	[Oct27 18:42] overlayfs: idmapped layers are currently not supported
	[Oct27 18:43] overlayfs: idmapped layers are currently not supported
	[Oct27 18:44] overlayfs: idmapped layers are currently not supported
	[ +50.528384] overlayfs: idmapped layers are currently not supported
	[Oct27 18:56] kauditd_printk_skb: 8 callbacks suppressed
	[Oct27 18:57] overlayfs: idmapped layers are currently not supported
	[Oct27 19:03] overlayfs: idmapped layers are currently not supported
	[Oct27 19:04] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [181fecf7fe386763beb184ea0291e75c61b9a42691db172501a56ef92cdc3988] <==
	{"level":"warn","ts":"2025-10-27T19:06:17.259856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:06:17.277496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:06:17.290919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:06:17.316222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:06:17.332299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:06:17.377045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:06:17.391858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:06:17.413954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:06:17.425450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:06:17.450571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:06:17.502311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:06:17.521677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:06:17.534231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:06:17.555961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:06:17.568054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:06:17.590654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:06:17.613218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:06:17.625592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:06:17.663476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:06:17.691350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:06:17.733980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:06:17.818447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58750","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T19:16:16.453785Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1132}
	{"level":"info","ts":"2025-10-27T19:16:16.477158Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1132,"took":"23.014003ms","hash":2822926116,"current-db-size-bytes":3260416,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1437696,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-10-27T19:16:16.477207Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2822926116,"revision":1132,"compact-revision":-1}
	
	
	==> etcd [fa7d5afec83bcdb20193daf6bdf69ba110b12d907504246e3720279eeb65e09c] <==
	{"level":"warn","ts":"2025-10-27T19:05:33.624221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:05:33.643584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:05:33.672404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:05:33.694064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:05:33.708870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:05:33.723155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:05:33.780386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56082","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T19:05:57.695589Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-27T19:05:57.695640Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-647336","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-27T19:05:57.695725Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T19:05:57.846232Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T19:05:57.846315Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T19:05:57.846353Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-27T19:05:57.846436Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-27T19:05:57.846474Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-27T19:05:57.846467Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T19:05:57.846543Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T19:05:57.846580Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-27T19:05:57.846511Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T19:05:57.846661Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T19:05:57.846696Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T19:05:57.850472Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-27T19:05:57.850559Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T19:05:57.850593Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-27T19:05:57.850600Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-647336","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 19:17:29 up  2:00,  0 user,  load average: 1.12, 0.52, 0.98
	Linux functional-647336 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [97cf3b5f20ba087ef64e3a39b5599b6814986dbccb4a97200cce9b85cf6d0554] <==
	I1027 19:05:29.132393       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 19:05:29.132576       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1027 19:05:29.132703       1 main.go:148] setting mtu 1500 for CNI 
	I1027 19:05:29.132715       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 19:05:29.132725       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T19:05:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 19:05:29.317845       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 19:05:29.325658       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 19:05:29.325731       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 19:05:29.327310       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 19:05:34.827076       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 19:05:34.827106       1 metrics.go:72] Registering metrics
	I1027 19:05:34.827176       1 controller.go:711] "Syncing nftables rules"
	I1027 19:05:39.317898       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:05:39.317949       1 main.go:301] handling current node
	I1027 19:05:49.318283       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:05:49.318328       1 main.go:301] handling current node
	
	
	==> kindnet [ea240287195becf2cc1998e9784c463f3aecb7ccedd54f5adc25cb14f9a887ad] <==
	I1027 19:15:20.426004       1 main.go:301] handling current node
	I1027 19:15:30.423849       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:15:30.423883       1 main.go:301] handling current node
	I1027 19:15:40.427715       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:15:40.427848       1 main.go:301] handling current node
	I1027 19:15:50.424246       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:15:50.424282       1 main.go:301] handling current node
	I1027 19:16:00.423639       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:16:00.423687       1 main.go:301] handling current node
	I1027 19:16:10.425237       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:16:10.425353       1 main.go:301] handling current node
	I1027 19:16:20.428901       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:16:20.429010       1 main.go:301] handling current node
	I1027 19:16:30.423922       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:16:30.424041       1 main.go:301] handling current node
	I1027 19:16:40.424087       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:16:40.424133       1 main.go:301] handling current node
	I1027 19:16:50.427068       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:16:50.427099       1 main.go:301] handling current node
	I1027 19:17:00.423612       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:17:00.423656       1 main.go:301] handling current node
	I1027 19:17:10.424173       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:17:10.424201       1 main.go:301] handling current node
	I1027 19:17:20.423977       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:17:20.424007       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0ff2d0054728fa24bbc1c425f66dc287590a8d2a72f7a76287c21bf4fff1889b] <==
	I1027 19:06:19.263290       1 aggregator.go:171] initial CRD sync complete...
	I1027 19:06:19.263361       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 19:06:19.263391       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 19:06:19.263421       1 cache.go:39] Caches are synced for autoregister controller
	I1027 19:06:19.263627       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 19:06:19.264473       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	E1027 19:06:19.304186       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1027 19:06:19.689517       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 19:06:19.812396       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 19:06:21.352087       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 19:06:21.507039       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 19:06:21.594641       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 19:06:21.610112       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 19:06:22.111939       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 19:06:22.168593       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 19:06:22.243265       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 19:06:36.382330       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.86.161"}
	I1027 19:06:46.030808       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.100.91.100"}
	I1027 19:06:49.750094       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.101.165.153"}
	E1027 19:07:18.279911       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1027 19:07:26.966054       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.109.20.196"}
	I1027 19:16:19.129684       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:17:07.233434       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 19:17:07.623989       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.196.212"}
	I1027 19:17:07.657783       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.157.101"}
	
	
	==> kube-controller-manager [87b4dbdd24a22e8fa05064807a683ea4b27cc570a386a15c672d90aefd9a753f] <==
	I1027 19:05:37.899431       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 19:05:37.899455       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 19:05:37.899480       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1027 19:05:37.899534       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 19:05:37.899683       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 19:05:37.899768       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 19:05:37.899815       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 19:05:37.900845       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1027 19:05:37.900901       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1027 19:05:37.904975       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 19:05:37.905607       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 19:05:37.908509       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 19:05:37.911699       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1027 19:05:37.911757       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1027 19:05:37.914904       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 19:05:37.914948       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 19:05:37.917266       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 19:05:37.917289       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 19:05:37.920466       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 19:05:37.920475       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 19:05:37.923737       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 19:05:37.925915       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 19:05:37.928019       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1027 19:05:37.931238       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 19:05:37.944054       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [88c93ecd26aac8cfd246fe6ac0b38d0fc80396145b8c09b8e26bb4e88bc922b8] <==
	I1027 19:06:22.046681       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 19:06:22.047334       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 19:06:22.048079       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 19:06:22.048094       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1027 19:06:22.048106       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 19:06:22.048147       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 19:06:22.049296       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 19:06:22.053375       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 19:06:22.060830       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1027 19:06:22.061069       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:06:22.061080       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 19:06:22.061085       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 19:06:22.061137       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1027 19:06:22.061181       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1027 19:06:22.063517       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 19:06:22.079530       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1027 19:17:07.381518       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:17:07.423007       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:17:07.423779       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:17:07.436037       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:17:07.437212       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:17:07.444498       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:17:07.458208       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:17:07.458382       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:17:07.467703       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [379b8fbd0bb7a27e68acc41f4fe9e134a0ee3c226b192b3c1b8aaa82e1a03a74] <==
	I1027 19:06:20.384135       1 server_linux.go:53] "Using iptables proxy"
	I1027 19:06:20.533284       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 19:06:20.633725       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 19:06:20.652758       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1027 19:06:20.652944       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 19:06:20.900178       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 19:06:20.900228       1 server_linux.go:132] "Using iptables Proxier"
	I1027 19:06:20.924340       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 19:06:20.924638       1 server.go:527] "Version info" version="v1.34.1"
	I1027 19:06:20.924654       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:06:20.925751       1 config.go:200] "Starting service config controller"
	I1027 19:06:20.925762       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 19:06:20.944223       1 config.go:106] "Starting endpoint slice config controller"
	I1027 19:06:20.944240       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 19:06:20.944286       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 19:06:20.944291       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 19:06:20.952035       1 config.go:309] "Starting node config controller"
	I1027 19:06:20.952053       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 19:06:20.952061       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 19:06:21.026663       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 19:06:21.044861       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 19:06:21.044907       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
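	
	(Both kube-proxy generations in this run emit the same startup warning: nodePortAddresses is unset, so NodePort connections are accepted on every local IP. The remedy the message suggests can also be expressed in the KubeProxyConfiguration file; a minimal sketch, assuming the node subnet 192.168.49.0/24 inferred from the NodeIP reported above:)
	
	    apiVersion: kubeproxy.config.k8s.io/v1alpha1
	    kind: KubeProxyConfiguration
	    # Accept NodePort traffic only on addresses within the node's subnet.
	    # Recent kube-proxy releases also accept the single keyword "primary",
	    # which is what the warning recommends via --nodeport-addresses.
	    nodePortAddresses:
	      - "192.168.49.0/24"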
	
	
	==> kube-proxy [b8ddcf50556db331d9a0dffa5c7716d56d20d4a3ac6bb7987586ecf24863b98e] <==
	I1027 19:05:29.269436       1 server_linux.go:53] "Using iptables proxy"
	I1027 19:05:30.444002       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 19:05:34.787127       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 19:05:34.799052       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1027 19:05:34.811181       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 19:05:34.926806       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 19:05:34.926864       1 server_linux.go:132] "Using iptables Proxier"
	I1027 19:05:34.931653       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 19:05:34.932122       1 server.go:527] "Version info" version="v1.34.1"
	I1027 19:05:34.932148       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:05:34.933264       1 config.go:200] "Starting service config controller"
	I1027 19:05:34.933276       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 19:05:34.945497       1 config.go:106] "Starting endpoint slice config controller"
	I1027 19:05:34.945519       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 19:05:34.945565       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 19:05:34.945570       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 19:05:34.946250       1 config.go:309] "Starting node config controller"
	I1027 19:05:34.946269       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 19:05:35.033393       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 19:05:35.046257       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 19:05:35.046365       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 19:05:35.046391       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4421a43a7a4ec77f8ed2b02c287ac7f2a28d3f7cbf084caa1a348dd14e326dd8] <==
	I1027 19:06:20.663944       1 serving.go:386] Generated self-signed cert in-memory
	I1027 19:06:23.103246       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 19:06:23.103279       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:06:23.108182       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1027 19:06:23.108271       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1027 19:06:23.108347       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:06:23.108390       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:06:23.108522       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 19:06:23.108560       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 19:06:23.109826       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 19:06:23.109900       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 19:06:23.209385       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:06:23.209509       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1027 19:06:23.209662       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kube-scheduler [55bf995090f8208c9834714cfffa8da5ed7fd789815fbc9e5ca8b4ba5c23c827] <==
	I1027 19:05:32.245754       1 serving.go:386] Generated self-signed cert in-memory
	W1027 19:05:34.374817       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 19:05:34.374926       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 19:05:34.374962       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 19:05:34.375021       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 19:05:34.612492       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 19:05:34.615109       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:05:34.617577       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 19:05:34.617662       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:05:34.617676       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:05:34.617692       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 19:05:34.818314       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:05:57.693580       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1027 19:05:57.693599       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1027 19:05:57.693619       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1027 19:05:57.693640       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:05:57.693818       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1027 19:05:57.693834       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 27 19:16:50 functional-647336 kubelet[3828]: E1027 19:16:50.657951    3828 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7wbx7" podUID="74c3084d-11bb-4051-bf96-8a108f596965"
	Oct 27 19:16:52 functional-647336 kubelet[3828]: E1027 19:16:52.657966    3828 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-75nls" podUID="183cbeda-1c17-4b7c-b154-aa2441fb4ace"
	Oct 27 19:16:55 functional-647336 kubelet[3828]: I1027 19:16:55.757793    3828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/8d2bbd64-5624-4863-92c7-ff6da8442ce9-test-volume\") pod \"busybox-mount\" (UID: \"8d2bbd64-5624-4863-92c7-ff6da8442ce9\") " pod="default/busybox-mount"
	Oct 27 19:16:55 functional-647336 kubelet[3828]: I1027 19:16:55.757847    3828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb6vv\" (UniqueName: \"kubernetes.io/projected/8d2bbd64-5624-4863-92c7-ff6da8442ce9-kube-api-access-wb6vv\") pod \"busybox-mount\" (UID: \"8d2bbd64-5624-4863-92c7-ff6da8442ce9\") " pod="default/busybox-mount"
	Oct 27 19:16:55 functional-647336 kubelet[3828]: W1027 19:16:55.985076    3828 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9708eb6e1b266c76a526ae8131106ad5056113fb1d6a5005f0aa4ca8ce31979d/crio-ec3119611523be0c9d5e8b8587432d50eab47c4111cfa88cff39c680cb0cbd59 WatchSource:0}: Error finding container ec3119611523be0c9d5e8b8587432d50eab47c4111cfa88cff39c680cb0cbd59: Status 404 returned error can't find the container with id ec3119611523be0c9d5e8b8587432d50eab47c4111cfa88cff39c680cb0cbd59
	Oct 27 19:16:59 functional-647336 kubelet[3828]: I1027 19:16:59.683347    3828 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/8d2bbd64-5624-4863-92c7-ff6da8442ce9-test-volume\") pod \"8d2bbd64-5624-4863-92c7-ff6da8442ce9\" (UID: \"8d2bbd64-5624-4863-92c7-ff6da8442ce9\") "
	Oct 27 19:16:59 functional-647336 kubelet[3828]: I1027 19:16:59.683419    3828 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb6vv\" (UniqueName: \"kubernetes.io/projected/8d2bbd64-5624-4863-92c7-ff6da8442ce9-kube-api-access-wb6vv\") pod \"8d2bbd64-5624-4863-92c7-ff6da8442ce9\" (UID: \"8d2bbd64-5624-4863-92c7-ff6da8442ce9\") "
	Oct 27 19:16:59 functional-647336 kubelet[3828]: I1027 19:16:59.683777    3828 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d2bbd64-5624-4863-92c7-ff6da8442ce9-test-volume" (OuterVolumeSpecName: "test-volume") pod "8d2bbd64-5624-4863-92c7-ff6da8442ce9" (UID: "8d2bbd64-5624-4863-92c7-ff6da8442ce9"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 27 19:16:59 functional-647336 kubelet[3828]: I1027 19:16:59.688167    3828 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d2bbd64-5624-4863-92c7-ff6da8442ce9-kube-api-access-wb6vv" (OuterVolumeSpecName: "kube-api-access-wb6vv") pod "8d2bbd64-5624-4863-92c7-ff6da8442ce9" (UID: "8d2bbd64-5624-4863-92c7-ff6da8442ce9"). InnerVolumeSpecName "kube-api-access-wb6vv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 27 19:16:59 functional-647336 kubelet[3828]: I1027 19:16:59.784295    3828 reconciler_common.go:299] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/8d2bbd64-5624-4863-92c7-ff6da8442ce9-test-volume\") on node \"functional-647336\" DevicePath \"\""
	Oct 27 19:16:59 functional-647336 kubelet[3828]: I1027 19:16:59.784509    3828 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wb6vv\" (UniqueName: \"kubernetes.io/projected/8d2bbd64-5624-4863-92c7-ff6da8442ce9-kube-api-access-wb6vv\") on node \"functional-647336\" DevicePath \"\""
	Oct 27 19:17:00 functional-647336 kubelet[3828]: I1027 19:17:00.454445    3828 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec3119611523be0c9d5e8b8587432d50eab47c4111cfa88cff39c680cb0cbd59"
	Oct 27 19:17:03 functional-647336 kubelet[3828]: E1027 19:17:03.658554    3828 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7wbx7" podUID="74c3084d-11bb-4051-bf96-8a108f596965"
	Oct 27 19:17:06 functional-647336 kubelet[3828]: E1027 19:17:06.658000    3828 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-75nls" podUID="183cbeda-1c17-4b7c-b154-aa2441fb4ace"
	Oct 27 19:17:07 functional-647336 kubelet[3828]: I1027 19:17:07.653218    3828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c1fa74a5-80b1-4c74-9c92-2c3b4d843c9e-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-5fz9q\" (UID: \"c1fa74a5-80b1-4c74-9c92-2c3b4d843c9e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5fz9q"
	Oct 27 19:17:07 functional-647336 kubelet[3828]: I1027 19:17:07.653280    3828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvr25\" (UniqueName: \"kubernetes.io/projected/c1fa74a5-80b1-4c74-9c92-2c3b4d843c9e-kube-api-access-pvr25\") pod \"kubernetes-dashboard-855c9754f9-5fz9q\" (UID: \"c1fa74a5-80b1-4c74-9c92-2c3b4d843c9e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5fz9q"
	Oct 27 19:17:07 functional-647336 kubelet[3828]: I1027 19:17:07.653305    3828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8cfr\" (UniqueName: \"kubernetes.io/projected/f53fbf8b-6777-4fbe-8d25-8fc76e0ab410-kube-api-access-n8cfr\") pod \"dashboard-metrics-scraper-77bf4d6c4c-bhq9d\" (UID: \"f53fbf8b-6777-4fbe-8d25-8fc76e0ab410\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-bhq9d"
	Oct 27 19:17:07 functional-647336 kubelet[3828]: I1027 19:17:07.653326    3828 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f53fbf8b-6777-4fbe-8d25-8fc76e0ab410-tmp-volume\") pod \"dashboard-metrics-scraper-77bf4d6c4c-bhq9d\" (UID: \"f53fbf8b-6777-4fbe-8d25-8fc76e0ab410\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-bhq9d"
	Oct 27 19:17:07 functional-647336 kubelet[3828]: W1027 19:17:07.914855    3828 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9708eb6e1b266c76a526ae8131106ad5056113fb1d6a5005f0aa4ca8ce31979d/crio-bb23cb0d062c0662e2ee8b66ec99baf53612ec7d8514f02925e11742f0eb3f17 WatchSource:0}: Error finding container bb23cb0d062c0662e2ee8b66ec99baf53612ec7d8514f02925e11742f0eb3f17: Status 404 returned error can't find the container with id bb23cb0d062c0662e2ee8b66ec99baf53612ec7d8514f02925e11742f0eb3f17
	Oct 27 19:17:12 functional-647336 kubelet[3828]: I1027 19:17:12.525633    3828 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5fz9q" podStartSLOduration=1.257829831 podStartE2EDuration="5.525611701s" podCreationTimestamp="2025-10-27 19:17:07 +0000 UTC" firstStartedPulling="2025-10-27 19:17:07.862074926 +0000 UTC m=+653.367029266" lastFinishedPulling="2025-10-27 19:17:12.129856591 +0000 UTC m=+657.634811136" observedRunningTime="2025-10-27 19:17:12.524275986 +0000 UTC m=+658.029230449" watchObservedRunningTime="2025-10-27 19:17:12.525611701 +0000 UTC m=+658.030566041"
	Oct 27 19:17:17 functional-647336 kubelet[3828]: E1027 19:17:17.659185    3828 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 27 19:17:17 functional-647336 kubelet[3828]: E1027 19:17:17.659235    3828 kuberuntime_image.go:43] "Failed to pull image" err="short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" image="kicbase/echo-server:latest"
	Oct 27 19:17:17 functional-647336 kubelet[3828]: E1027 19:17:17.659302    3828 kuberuntime_manager.go:1449] "Unhandled Error" err="container echo-server start failed in pod hello-node-75c85bcc94-7wbx7_default(74c3084d-11bb-4051-bf96-8a108f596965): ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" logger="UnhandledError"
	Oct 27 19:17:17 functional-647336 kubelet[3828]: E1027 19:17:17.659331    3828 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ErrImagePull: \"short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-7wbx7" podUID="74c3084d-11bb-4051-bf96-8a108f596965"
	Oct 27 19:17:18 functional-647336 kubelet[3828]: E1027 19:17:18.657954    3828 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-75nls" podUID="183cbeda-1c17-4b7c-b154-aa2441fb4ace"
	
	
	==> kubernetes-dashboard [1be10bf92181ebd6f981f477b9d7b9d88a9840ba8df93e69ef68e5dbf42bccad] <==
	2025/10/27 19:17:12 Using namespace: kubernetes-dashboard
	2025/10/27 19:17:12 Using in-cluster config to connect to apiserver
	2025/10/27 19:17:12 Using secret token for csrf signing
	2025/10/27 19:17:12 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 19:17:12 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 19:17:12 Successful initial request to the apiserver, version: v1.34.1
	2025/10/27 19:17:12 Generating JWE encryption key
	2025/10/27 19:17:12 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 19:17:12 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 19:17:12 Initializing JWE encryption key from synchronized object
	2025/10/27 19:17:12 Creating in-cluster Sidecar client
	2025/10/27 19:17:12 Serving insecurely on HTTP port: 9090
	2025/10/27 19:17:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 19:17:12 Starting overwatch
	
	
	==> storage-provisioner [70e036cf2b981116044b6d44a1df74a5705d47f9601c08075ec3513956bdbb75] <==
	I1027 19:05:42.446442       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 19:05:42.459087       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 19:05:42.459141       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 19:05:42.461394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:05:45.916813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:05:50.177764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:05:53.781557       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:05:56.836392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fefcbf4c9e6aa91ea688ab48cb2bba269fd3e33ff5971a951c07c3195f6483a3] <==
	W1027 19:17:04.748004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:06.751636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:06.757667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:08.760946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:08.768387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:10.771747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:10.776514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:12.779637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:12.783912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:14.787775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:14.792607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:16.795251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:16.799153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:18.802063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:18.810450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:20.813449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:20.817577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:22.821360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:22.827830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:24.831500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:24.835899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:26.838620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:26.843351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:28.846075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:17:28.851089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
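
Note on the storage-provisioner output above: the repeated "v1 Endpoints is deprecated" warnings are not themselves a failure. The provisioner takes its leader-election lock on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath, per the "attempting to acquire leader lease" line), so every renewal of that lock trips the server-side deprecation warning. A quick way to inspect the lock object directly (a diagnostic sketch, not part of the test run):

	kubectl --context functional-647336 -n kube-system \
	  get endpoints k8s.io-minikube-hostpath -o yaml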
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-647336 -n functional-647336
helpers_test.go:269: (dbg) Run:  kubectl --context functional-647336 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-7wbx7 hello-node-connect-7d85dfc575-75nls
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-647336 describe pod busybox-mount hello-node-75c85bcc94-7wbx7 hello-node-connect-7d85dfc575-75nls
helpers_test.go:290: (dbg) kubectl --context functional-647336 describe pod busybox-mount hello-node-75c85bcc94-7wbx7 hello-node-connect-7d85dfc575-75nls:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-647336/192.168.49.2
	Start Time:       Mon, 27 Oct 2025 19:16:55 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://3241d55f30d376c8ace21d6c9e0fe0d72a12ae882b44da53512ee85bcb3fc201
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 27 Oct 2025 19:16:58 +0000
	      Finished:     Mon, 27 Oct 2025 19:16:58 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wb6vv (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-wb6vv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  34s   default-scheduler  Successfully assigned default/busybox-mount to functional-647336
	  Normal  Pulling    35s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     32s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.036s (2.037s including waiting). Image size: 3774172 bytes.
	  Normal  Created    32s   kubelet            Created container: mount-munger
	  Normal  Started    32s   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-7wbx7
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-647336/192.168.49.2
	Start Time:       Mon, 27 Oct 2025 19:06:45 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z65ff (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-z65ff:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                 From               Message
	  ----     ------     ----                ----               -------
	  Normal   Scheduled  10m                 default-scheduler  Successfully assigned default/hello-node-75c85bcc94-7wbx7 to functional-647336
	  Normal   Pulling    8m1s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     8m1s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     8m1s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    40s (x42 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     40s (x42 over 10m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-75nls
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-647336/192.168.49.2
	Start Time:       Mon, 27 Oct 2025 19:07:26 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-blqtn (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-blqtn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-75nls to functional-647336
	  Normal   Pulling    7m15s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m15s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m15s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m54s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m40s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.47s)
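
The pod events above point at the root cause: crio on this node enforces short-name resolution ("short name mode is enforcing"), so the unqualified reference "kicbase/echo-server" is rejected as ambiguous rather than pulled. A minimal sketch of the workload-level workaround, assuming docker.io is the registry this image is meant to come from (illustrative only, not the test's own code):

	# recreate the deployment with a fully qualified image name,
	# so no short-name resolution is needed
	kubectl --context functional-647336 delete deployment hello-node-connect
	kubectl --context functional-647336 create deployment hello-node-connect \
	  --image=docker.io/kicbase/echo-server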

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 image load --daemon kicbase/echo-server:functional-647336 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-647336" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 image load --daemon kicbase/echo-server:functional-647336 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-647336" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-647336
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 image load --daemon kicbase/echo-server:functional-647336 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-647336" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.27s)
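
All three ImageCommands failures above share one symptom: `image load --daemon` exits cleanly, but the tag never appears in `image ls`. To check whether the image reached the node's crio image store at all, independent of minikube's own listing, a diagnostic sketch using the profile name from the runs above:

	# list images as crio sees them inside the node
	out/minikube-linux-arm64 -p functional-647336 ssh -- \
	  sudo crictl images | grep echo-server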

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-647336 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-647336 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-7wbx7" [74c3084d-11bb-4051-bf96-8a108f596965] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-647336 -n functional-647336
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-27 19:16:46.384527965 +0000 UTC m=+1208.534940239
functional_test.go:1460: (dbg) Run:  kubectl --context functional-647336 describe po hello-node-75c85bcc94-7wbx7 -n default
functional_test.go:1460: (dbg) kubectl --context functional-647336 describe po hello-node-75c85bcc94-7wbx7 -n default:
Name:             hello-node-75c85bcc94-7wbx7
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-647336/192.168.49.2
Start Time:       Mon, 27 Oct 2025 19:06:45 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z65ff (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-z65ff:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-7wbx7 to functional-647336
Normal   Pulling    7m17s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m17s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m17s (x5 over 10m)   kubelet            Error: ErrImagePull
Normal   BackOff    4m49s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m49s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-647336 logs hello-node-75c85bcc94-7wbx7 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-647336 logs hello-node-75c85bcc94-7wbx7 -n default: exit status 1 (101.568268ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-7wbx7" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-647336 logs hello-node-75c85bcc94-7wbx7 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.79s)
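
DeployApp times out on the same short-name enforcement seen in ServiceCmdConnect: the deployment is created with the bare name kicbase/echo-server, and the kubelet retries the pull for the full 10m0s. Besides fully qualifying the image, the node itself can be given the intended mapping via a short-name alias; a sketch, assuming the standard containers-registries drop-in directory exists inside the minikube node:

	# run inside the node (e.g. via `out/minikube-linux-arm64 -p functional-647336 ssh`)
	cat <<-'EOF' | sudo tee /etc/containers/registries.conf.d/99-echo-server.conf
	[aliases]
	"kicbase/echo-server" = "docker.io/kicbase/echo-server"
	EOF
	sudo systemctl restart crio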

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 image save kicbase/echo-server:functional-647336 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1027 19:06:47.815931  291273 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:06:47.817483  291273 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:06:47.817534  291273 out.go:374] Setting ErrFile to fd 2...
	I1027 19:06:47.817594  291273 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:06:47.818011  291273 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 19:06:47.819447  291273 config.go:182] Loaded profile config "functional-647336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:06:47.819666  291273 config.go:182] Loaded profile config "functional-647336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:06:47.821620  291273 cli_runner.go:164] Run: docker container inspect functional-647336 --format={{.State.Status}}
	I1027 19:06:47.841346  291273 ssh_runner.go:195] Run: systemctl --version
	I1027 19:06:47.841419  291273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-647336
	I1027 19:06:47.864028  291273 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/functional-647336/id_rsa Username:docker}
	I1027 19:06:47.977305  291273 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1027 19:06:47.977365  291273 cache_images.go:254] Failed to load cached images for "functional-647336": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1027 19:06:47.977382  291273 cache_images.go:266] failed pushing to: functional-647336

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)
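
ImageLoadFromFile is a cascading failure: ImageSaveToFile above never wrote echo-server-save.tar, so the load step fails on the same path with "no such file or directory". A sketch of the round trip with an explicit existence check between the two steps (paths copied from the test output):

	out/minikube-linux-arm64 -p functional-647336 image save \
	  kicbase/echo-server:functional-647336 \
	  /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	# fail fast if the save silently produced nothing
	test -s /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar || exit 1
	out/minikube-linux-arm64 -p functional-647336 image load \
	  /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar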

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-647336
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 image save --daemon kicbase/echo-server:functional-647336 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-647336
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-647336: exit status 1 (19.096112ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-647336

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-647336

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.38s)
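
Note the name the test inspects after `image save --daemon`: localhost/kicbase/echo-server:functional-647336. Images round-tripped through a crio-backed save tend to land in Docker under a localhost/ prefix when the original name was unqualified (an assumption about the normalization here, not something this log proves); the inspect fails regardless, because the earlier save produced nothing to transfer. Checking both spellings makes the distinction visible:

	docker image inspect localhost/kicbase/echo-server:functional-647336 \
	  || docker image inspect kicbase/echo-server:functional-647336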

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-647336 service --namespace=default --https --url hello-node: exit status 115 (384.837989ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:32086
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-647336 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-647336 service hello-node --url --format={{.IP}}: exit status 115 (391.045415ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-647336 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-647336 service hello-node --url: exit status 115 (387.512596ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:32086
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-647336 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32086
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.39s)
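
All three service URL failures (HTTPS, Format, URL) exit with SVC_UNREACHABLE for the same reason: the hello-node pod never left ImagePullBackOff, so the service has no ready endpoints even though a NodePort URL can still be computed (hence the http://192.168.49.2:32086 line above). A quick readiness check, using EndpointSlices since v1 Endpoints is deprecated:

	kubectl --context functional-647336 get endpointslices \
	  -l kubernetes.io/service-name=hello-node -o wide
	kubectl --context functional-647336 get pods -l app=hello-node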

                                                
                                    
TestJSONOutput/pause/Command (2.48s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-368309 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-368309 --output=json --user=testUser: exit status 80 (2.483744186s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4a7938c0-0ee8-477b-8fc8-ee67c8a22936","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-368309 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"f8e7fbe8-1631-4f4c-a434-e97779392876","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-27T19:30:31Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"afb97bea-d2ed-4209-a606-1fee5e183b65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-368309 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.48s)

                                                
                                    
TestJSONOutput/unpause/Command (1.88s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-368309 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-368309 --output=json --user=testUser: exit status 80 (1.880895075s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a2934564-1034-40ad-b346-9167c90c61c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-368309 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"bcf48678-12d7-42d9-bdcf-134c8c4de7f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-27T19:30:33Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"a927a857-5398-4e87-a883-f8a324c11fcb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-368309 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.88s)
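
Both JSON-output failures (and the TestPause/serial/Pause failure below) stop on the same error: `sudo runc list -f json` cannot open /run/runc. minikube's pause path shells out to runc to enumerate running containers; one plausible cause, stated here as an assumption, is that crio on this node uses a different default OCI runtime (for example crun, whose state lives under /run/crun), so the runc state directory is never created. A diagnostic sketch:

	out/minikube-linux-arm64 -p json-output-368309 ssh -- \
	  "sudo ls -ld /run/runc /run/crun 2>&1; sudo crictl ps --quiet | head"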

                                                
                                    
TestPause/serial/Pause (6.49s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-470021 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-470021 --alsologtostderr -v=5: exit status 80 (1.812459292s)

                                                
                                                
-- stdout --
	* Pausing node pause-470021 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 19:53:21.507593  429941 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:53:21.508472  429941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:53:21.508521  429941 out.go:374] Setting ErrFile to fd 2...
	I1027 19:53:21.508544  429941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:53:21.508854  429941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 19:53:21.509171  429941 out.go:368] Setting JSON to false
	I1027 19:53:21.509228  429941 mustload.go:65] Loading cluster: pause-470021
	I1027 19:53:21.509710  429941 config.go:182] Loaded profile config "pause-470021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:53:21.510242  429941 cli_runner.go:164] Run: docker container inspect pause-470021 --format={{.State.Status}}
	I1027 19:53:21.527615  429941 host.go:66] Checking if "pause-470021" exists ...
	I1027 19:53:21.527931  429941 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:53:21.622673  429941 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-27 19:53:21.611652475 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 19:53:21.623453  429941 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-470021 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1027 19:53:21.626688  429941 out.go:179] * Pausing node pause-470021 ... 
	I1027 19:53:21.630395  429941 host.go:66] Checking if "pause-470021" exists ...
	I1027 19:53:21.630743  429941 ssh_runner.go:195] Run: systemctl --version
	I1027 19:53:21.630795  429941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-470021
	I1027 19:53:21.648842  429941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/pause-470021/id_rsa Username:docker}
	I1027 19:53:21.762486  429941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:53:21.776838  429941 pause.go:52] kubelet running: true
	I1027 19:53:21.776926  429941 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 19:53:22.074455  429941 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 19:53:22.074549  429941 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 19:53:22.142944  429941 cri.go:89] found id: "edb8ab36eea95aa2a08a1d4c6d4048ccc588d84a29202156e960705fc0fa969e"
	I1027 19:53:22.142965  429941 cri.go:89] found id: "bb5755505420e2c40bd2f92567f06c5b2bff9cc40d5286bd60d82a6f776ca771"
	I1027 19:53:22.142970  429941 cri.go:89] found id: "c384025bdb6e4e5fd0f211129c90fa79ae691468550d2be99ba79f5cff89e73e"
	I1027 19:53:22.142974  429941 cri.go:89] found id: "d58e48bacf1495d3c508b1e7222843a56c1e79dfedb59aeb62a1c6aad514d698"
	I1027 19:53:22.142977  429941 cri.go:89] found id: "78decaae3b0e60034f2398cb0b994dce873210be4f725d847ff7529ca517374f"
	I1027 19:53:22.143009  429941 cri.go:89] found id: "669fb029456a2c87088307a9f1f18b0a214174bd7c9bb178f24f3ca5a7fc6e1b"
	I1027 19:53:22.143014  429941 cri.go:89] found id: "336f9e0fad2e20cdf64c964f42b31d59f4633709021335d2e471a64c05518236"
	I1027 19:53:22.143017  429941 cri.go:89] found id: "7eb02b6b3e6acbfaa83db3a48ac65c331d99ca20916430c27ada8e7bfb90bc22"
	I1027 19:53:22.143020  429941 cri.go:89] found id: "69a0468ab45ef90113638021bffb192716f99ab5157428ccbd881c54365dd32d"
	I1027 19:53:22.143025  429941 cri.go:89] found id: "87b11ab122b25bab328b375d9d04001497127ff2585367e02e286374156d6569"
	I1027 19:53:22.143028  429941 cri.go:89] found id: "bdd6aaebbf9b1f7984851798a00219d0ab9df585fc4aa577a1bf0220aa1fd7fc"
	I1027 19:53:22.143032  429941 cri.go:89] found id: "5bab714cb0c4efef9b8229cf91fc7171e0d11a7652571bf103319a343da548c2"
	I1027 19:53:22.143035  429941 cri.go:89] found id: "eb0212fd806c68f3148012bf7e975926ecbbcc8417725249917368a738dcb11d"
	I1027 19:53:22.143038  429941 cri.go:89] found id: "9c65fbe4eff1527a24e959481045341035c8c2ea0c34900ec330446573f13baf"
	I1027 19:53:22.143041  429941 cri.go:89] found id: ""
	I1027 19:53:22.143087  429941 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:53:22.154300  429941 retry.go:31] will retry after 351.274226ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:53:22Z" level=error msg="open /run/runc: no such file or directory"
	I1027 19:53:22.505837  429941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:53:22.518892  429941 pause.go:52] kubelet running: false
	I1027 19:53:22.519013  429941 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 19:53:22.671807  429941 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 19:53:22.671985  429941 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 19:53:22.736508  429941 cri.go:89] found id: "edb8ab36eea95aa2a08a1d4c6d4048ccc588d84a29202156e960705fc0fa969e"
	I1027 19:53:22.736534  429941 cri.go:89] found id: "bb5755505420e2c40bd2f92567f06c5b2bff9cc40d5286bd60d82a6f776ca771"
	I1027 19:53:22.736539  429941 cri.go:89] found id: "c384025bdb6e4e5fd0f211129c90fa79ae691468550d2be99ba79f5cff89e73e"
	I1027 19:53:22.736542  429941 cri.go:89] found id: "d58e48bacf1495d3c508b1e7222843a56c1e79dfedb59aeb62a1c6aad514d698"
	I1027 19:53:22.736546  429941 cri.go:89] found id: "78decaae3b0e60034f2398cb0b994dce873210be4f725d847ff7529ca517374f"
	I1027 19:53:22.736549  429941 cri.go:89] found id: "669fb029456a2c87088307a9f1f18b0a214174bd7c9bb178f24f3ca5a7fc6e1b"
	I1027 19:53:22.736552  429941 cri.go:89] found id: "336f9e0fad2e20cdf64c964f42b31d59f4633709021335d2e471a64c05518236"
	I1027 19:53:22.736555  429941 cri.go:89] found id: "7eb02b6b3e6acbfaa83db3a48ac65c331d99ca20916430c27ada8e7bfb90bc22"
	I1027 19:53:22.736558  429941 cri.go:89] found id: "69a0468ab45ef90113638021bffb192716f99ab5157428ccbd881c54365dd32d"
	I1027 19:53:22.736564  429941 cri.go:89] found id: "87b11ab122b25bab328b375d9d04001497127ff2585367e02e286374156d6569"
	I1027 19:53:22.736586  429941 cri.go:89] found id: "bdd6aaebbf9b1f7984851798a00219d0ab9df585fc4aa577a1bf0220aa1fd7fc"
	I1027 19:53:22.736594  429941 cri.go:89] found id: "5bab714cb0c4efef9b8229cf91fc7171e0d11a7652571bf103319a343da548c2"
	I1027 19:53:22.736597  429941 cri.go:89] found id: "eb0212fd806c68f3148012bf7e975926ecbbcc8417725249917368a738dcb11d"
	I1027 19:53:22.736600  429941 cri.go:89] found id: "9c65fbe4eff1527a24e959481045341035c8c2ea0c34900ec330446573f13baf"
	I1027 19:53:22.736603  429941 cri.go:89] found id: ""
	I1027 19:53:22.736657  429941 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:53:22.747599  429941 retry.go:31] will retry after 258.422596ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:53:22Z" level=error msg="open /run/runc: no such file or directory"
	I1027 19:53:23.007109  429941 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:53:23.020656  429941 pause.go:52] kubelet running: false
	I1027 19:53:23.020717  429941 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 19:53:23.156736  429941 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 19:53:23.156813  429941 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 19:53:23.224465  429941 cri.go:89] found id: "edb8ab36eea95aa2a08a1d4c6d4048ccc588d84a29202156e960705fc0fa969e"
	I1027 19:53:23.224540  429941 cri.go:89] found id: "bb5755505420e2c40bd2f92567f06c5b2bff9cc40d5286bd60d82a6f776ca771"
	I1027 19:53:23.224560  429941 cri.go:89] found id: "c384025bdb6e4e5fd0f211129c90fa79ae691468550d2be99ba79f5cff89e73e"
	I1027 19:53:23.224581  429941 cri.go:89] found id: "d58e48bacf1495d3c508b1e7222843a56c1e79dfedb59aeb62a1c6aad514d698"
	I1027 19:53:23.224615  429941 cri.go:89] found id: "78decaae3b0e60034f2398cb0b994dce873210be4f725d847ff7529ca517374f"
	I1027 19:53:23.224646  429941 cri.go:89] found id: "669fb029456a2c87088307a9f1f18b0a214174bd7c9bb178f24f3ca5a7fc6e1b"
	I1027 19:53:23.224656  429941 cri.go:89] found id: "336f9e0fad2e20cdf64c964f42b31d59f4633709021335d2e471a64c05518236"
	I1027 19:53:23.224660  429941 cri.go:89] found id: "7eb02b6b3e6acbfaa83db3a48ac65c331d99ca20916430c27ada8e7bfb90bc22"
	I1027 19:53:23.224663  429941 cri.go:89] found id: "69a0468ab45ef90113638021bffb192716f99ab5157428ccbd881c54365dd32d"
	I1027 19:53:23.224670  429941 cri.go:89] found id: "87b11ab122b25bab328b375d9d04001497127ff2585367e02e286374156d6569"
	I1027 19:53:23.224673  429941 cri.go:89] found id: "bdd6aaebbf9b1f7984851798a00219d0ab9df585fc4aa577a1bf0220aa1fd7fc"
	I1027 19:53:23.224676  429941 cri.go:89] found id: "5bab714cb0c4efef9b8229cf91fc7171e0d11a7652571bf103319a343da548c2"
	I1027 19:53:23.224680  429941 cri.go:89] found id: "eb0212fd806c68f3148012bf7e975926ecbbcc8417725249917368a738dcb11d"
	I1027 19:53:23.224686  429941 cri.go:89] found id: "9c65fbe4eff1527a24e959481045341035c8c2ea0c34900ec330446573f13baf"
	I1027 19:53:23.224693  429941 cri.go:89] found id: ""
	I1027 19:53:23.224741  429941 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:53:23.239465  429941 out.go:203] 
	W1027 19:53:23.242361  429941 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:53:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 19:53:23.242384  429941 out.go:285] * 
	W1027 19:53:23.249184  429941 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 19:53:23.252273  429941 out.go:203] 

                                                
                                                
** /stderr **
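The failure recorded above is mechanical rather than flaky: before freezing anything, minikube's pause path enumerates running containers with "sudo runc list -f json", and on this node the runc state root /run/runc does not exist, so every attempt (including the timed retries) exits with status 1 until pause gives up with GUEST_PAUSE. Below is a minimal Go sketch of that enumeration step; treating a missing state root as "no running containers" is an assumption for illustration, not minikube's actual handling (the log above shows it surfacing the error instead).

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	// listRunc shells out to `runc --root <root> list -f json` and treats a
	// missing state root as an empty container list instead of a hard error.
	// The lenient fallback is hypothetical, added for illustration only.
	func listRunc(root string) ([]map[string]any, error) {
		cmd := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json")
		out, err := cmd.Output()
		if err != nil {
			var stderr string
			if ee, ok := err.(*exec.ExitError); ok {
				stderr = string(ee.Stderr)
			}
			if strings.Contains(stderr, "no such file or directory") {
				return nil, nil // state root absent: nothing is running under runc here
			}
			return nil, fmt.Errorf("runc list: %v: %s", err, stderr)
		}
		var containers []map[string]any
		if err := json.Unmarshal(out, &containers); err != nil {
			return nil, err
		}
		return containers, nil
	}

	func main() {
		cs, err := listRunc("/run/runc")
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Printf("%d running container(s)\n", len(cs))
	}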
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-470021 --alsologtostderr -v=5" : exit status 80
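The "will retry after 351.274226ms" / "258.422596ms" lines in the trace come from minikube's generic retry helper (retry.go:31), which re-runs the failing listing a few times with randomized, growing delays before surfacing the error. A self-contained sketch of that pattern follows; the backoff constants and schedule here are illustrative stand-ins, not minikube's own.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff re-runs fn with a jittered, roughly doubling delay,
	// giving up after maxAttempts. It imitates the "will retry after ..."
	// behavior in the log; the exact schedule is a stand-in.
	func retryWithBackoff(maxAttempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < maxAttempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			d := base<<i + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		err := retryWithBackoff(3, 200*time.Millisecond, func() error {
			return errors.New("list running: runc: exit status 1")
		})
		fmt.Println("giving up:", err)
	}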
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-470021
helpers_test.go:243: (dbg) docker inspect pause-470021:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "41e2ae07e79ce08dd76e9888603203ecd72a731510cd788d005065241ef8eb29",
	        "Created": "2025-10-27T19:51:33.257572081Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 423689,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T19:51:33.296886813Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/41e2ae07e79ce08dd76e9888603203ecd72a731510cd788d005065241ef8eb29/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41e2ae07e79ce08dd76e9888603203ecd72a731510cd788d005065241ef8eb29/hostname",
	        "HostsPath": "/var/lib/docker/containers/41e2ae07e79ce08dd76e9888603203ecd72a731510cd788d005065241ef8eb29/hosts",
	        "LogPath": "/var/lib/docker/containers/41e2ae07e79ce08dd76e9888603203ecd72a731510cd788d005065241ef8eb29/41e2ae07e79ce08dd76e9888603203ecd72a731510cd788d005065241ef8eb29-json.log",
	        "Name": "/pause-470021",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-470021:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-470021",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41e2ae07e79ce08dd76e9888603203ecd72a731510cd788d005065241ef8eb29",
	                "LowerDir": "/var/lib/docker/overlay2/0b21857dff85b810b4acf41af2d3a8653b571714cae7063f820663947d90dc11-init/diff:/var/lib/docker/overlay2/0567218bbe05e8b59b2aef7ad82032d6716040c1f9d9d73e7de8682101079f76/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0b21857dff85b810b4acf41af2d3a8653b571714cae7063f820663947d90dc11/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0b21857dff85b810b4acf41af2d3a8653b571714cae7063f820663947d90dc11/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0b21857dff85b810b4acf41af2d3a8653b571714cae7063f820663947d90dc11/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-470021",
	                "Source": "/var/lib/docker/volumes/pause-470021/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-470021",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-470021",
	                "name.minikube.sigs.k8s.io": "pause-470021",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "97b09cda74410479662655b29934acf254405d431f0b7a9555a703e4958cb74b",
	            "SandboxKey": "/var/run/docker/netns/97b09cda7441",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33383"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33384"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33387"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33385"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33386"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-470021": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:03:10:b4:60:91",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "14b16c2a2409a701f6d5ee6b9bdae0a104e3770e998b35acac7e64929d3d8416",
	                    "EndpointID": "6292a631a9b4a57367c7530fcf80ded420bcbc2ed0b724f8eb1eba2fe9e60023",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-470021",
	                        "41e2ae07e79c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
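Most of the inspect dump above is incidental; the fields the post-mortem actually relies on are State.Status/State.Running (the container itself never stopped) and the published host ports under NetworkSettings.Ports, e.g. 22/tcp -> 127.0.0.1:33383, which the provisioner later dials for SSH. A small Go sketch that extracts just those fields from "docker inspect" output; the struct below names only keys present in the JSON above.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectInfo models only the fields this post-mortem cares about:
	// run state plus published host ports.
	type inspectInfo struct {
		State struct {
			Status  string
			Running bool
		}
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "pause-470021").Output()
		if err != nil {
			panic(err)
		}
		var infos []inspectInfo // docker inspect always emits a JSON array
		if err := json.Unmarshal(out, &infos); err != nil {
			panic(err)
		}
		c := infos[0]
		fmt.Println("status:", c.State.Status, "running:", c.State.Running)
		// Equivalent to the template used later in this log:
		// {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}
		if ssh := c.NetworkSettings.Ports["22/tcp"]; len(ssh) > 0 {
			fmt.Printf("ssh endpoint: %s:%s\n", ssh[0].HostIp, ssh[0].HostPort)
		}
	}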
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-470021 -n pause-470021
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-470021 -n pause-470021: exit status 2 (331.854271ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
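The harness tolerates the nonzero exit here because the host container still reports Running while the cluster components are down (kubelet was disabled during the failed pause). A hedged Go sketch of that distinction, pairing the exit code with the printed Host state; reading exit status 2 as "degraded but host up" is this report's interpretation, not a documented minikube contract.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.Host}}", "-p", "pause-470021")
		out, err := cmd.Output() // stdout is still returned on a nonzero exit
		host := strings.TrimSpace(string(out))
		if err == nil {
			fmt.Println("fully healthy, host:", host)
			return
		}
		// Nonzero exit with Host still "Running" matches the
		// "exit status 2 (may be ok)" case above: the container is up
		// but kubelet/apiserver are not.
		if ee, ok := err.(*exec.ExitError); ok && host == "Running" {
			fmt.Printf("degraded (exit %d) but host is running\n", ee.ExitCode())
			return
		}
		fmt.Println("status failed:", err)
	}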
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-470021 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-470021 logs -n 25: (1.438738268s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-358331 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-358331       │ jenkins │ v1.37.0 │ 27 Oct 25 19:47 UTC │ 27 Oct 25 19:48 UTC │
	│ start   │ -p missing-upgrade-033557 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-033557    │ jenkins │ v1.32.0 │ 27 Oct 25 19:47 UTC │ 27 Oct 25 19:48 UTC │
	│ start   │ -p NoKubernetes-358331 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-358331       │ jenkins │ v1.37.0 │ 27 Oct 25 19:48 UTC │ 27 Oct 25 19:48 UTC │
	│ delete  │ -p NoKubernetes-358331                                                                                                                   │ NoKubernetes-358331       │ jenkins │ v1.37.0 │ 27 Oct 25 19:48 UTC │ 27 Oct 25 19:48 UTC │
	│ start   │ -p NoKubernetes-358331 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-358331       │ jenkins │ v1.37.0 │ 27 Oct 25 19:48 UTC │ 27 Oct 25 19:48 UTC │
	│ start   │ -p missing-upgrade-033557 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-033557    │ jenkins │ v1.37.0 │ 27 Oct 25 19:48 UTC │ 27 Oct 25 19:49 UTC │
	│ ssh     │ -p NoKubernetes-358331 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-358331       │ jenkins │ v1.37.0 │ 27 Oct 25 19:48 UTC │                     │
	│ stop    │ -p NoKubernetes-358331                                                                                                                   │ NoKubernetes-358331       │ jenkins │ v1.37.0 │ 27 Oct 25 19:48 UTC │ 27 Oct 25 19:48 UTC │
	│ start   │ -p NoKubernetes-358331 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-358331       │ jenkins │ v1.37.0 │ 27 Oct 25 19:48 UTC │ 27 Oct 25 19:48 UTC │
	│ ssh     │ -p NoKubernetes-358331 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-358331       │ jenkins │ v1.37.0 │ 27 Oct 25 19:48 UTC │                     │
	│ delete  │ -p NoKubernetes-358331                                                                                                                   │ NoKubernetes-358331       │ jenkins │ v1.37.0 │ 27 Oct 25 19:48 UTC │ 27 Oct 25 19:48 UTC │
	│ start   │ -p kubernetes-upgrade-524430 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-524430 │ jenkins │ v1.37.0 │ 27 Oct 25 19:48 UTC │ 27 Oct 25 19:49 UTC │
	│ delete  │ -p missing-upgrade-033557                                                                                                                │ missing-upgrade-033557    │ jenkins │ v1.37.0 │ 27 Oct 25 19:49 UTC │ 27 Oct 25 19:49 UTC │
	│ stop    │ -p kubernetes-upgrade-524430                                                                                                             │ kubernetes-upgrade-524430 │ jenkins │ v1.37.0 │ 27 Oct 25 19:49 UTC │ 27 Oct 25 19:49 UTC │
	│ start   │ -p stopped-upgrade-296733 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-296733    │ jenkins │ v1.32.0 │ 27 Oct 25 19:49 UTC │ 27 Oct 25 19:50 UTC │
	│ start   │ -p kubernetes-upgrade-524430 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-524430 │ jenkins │ v1.37.0 │ 27 Oct 25 19:49 UTC │                     │
	│ stop    │ stopped-upgrade-296733 stop                                                                                                              │ stopped-upgrade-296733    │ jenkins │ v1.32.0 │ 27 Oct 25 19:50 UTC │ 27 Oct 25 19:50 UTC │
	│ start   │ -p stopped-upgrade-296733 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-296733    │ jenkins │ v1.37.0 │ 27 Oct 25 19:50 UTC │ 27 Oct 25 19:50 UTC │
	│ delete  │ -p stopped-upgrade-296733                                                                                                                │ stopped-upgrade-296733    │ jenkins │ v1.37.0 │ 27 Oct 25 19:50 UTC │ 27 Oct 25 19:50 UTC │
	│ start   │ -p running-upgrade-048851 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-048851    │ jenkins │ v1.32.0 │ 27 Oct 25 19:50 UTC │ 27 Oct 25 19:51 UTC │
	│ start   │ -p running-upgrade-048851 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-048851    │ jenkins │ v1.37.0 │ 27 Oct 25 19:51 UTC │ 27 Oct 25 19:51 UTC │
	│ delete  │ -p running-upgrade-048851                                                                                                                │ running-upgrade-048851    │ jenkins │ v1.37.0 │ 27 Oct 25 19:51 UTC │ 27 Oct 25 19:51 UTC │
	│ start   │ -p pause-470021 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-470021              │ jenkins │ v1.37.0 │ 27 Oct 25 19:51 UTC │ 27 Oct 25 19:52 UTC │
	│ start   │ -p pause-470021 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-470021              │ jenkins │ v1.37.0 │ 27 Oct 25 19:52 UTC │ 27 Oct 25 19:53 UTC │
	│ pause   │ -p pause-470021 --alsologtostderr -v=5                                                                                                   │ pause-470021              │ jenkins │ v1.37.0 │ 27 Oct 25 19:53 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 19:52:52
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 19:52:52.649414  428024 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:52:52.649533  428024 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:52:52.649545  428024 out.go:374] Setting ErrFile to fd 2...
	I1027 19:52:52.649549  428024 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:52:52.649811  428024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 19:52:52.650165  428024 out.go:368] Setting JSON to false
	I1027 19:52:52.651168  428024 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9325,"bootTime":1761585448,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1027 19:52:52.651239  428024 start.go:141] virtualization:  
	I1027 19:52:52.654567  428024 out.go:179] * [pause-470021] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 19:52:52.658459  428024 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:52:52.658529  428024 notify.go:220] Checking for updates...
	I1027 19:52:52.664856  428024 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:52:52.667880  428024 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 19:52:52.670760  428024 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube
	I1027 19:52:52.673712  428024 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 19:52:52.676665  428024 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:52:50.079112  412559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:52:50.090711  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:52:50.090796  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:52:50.119667  412559 cri.go:89] found id: ""
	I1027 19:52:50.119693  412559 logs.go:282] 0 containers: []
	W1027 19:52:50.119702  412559 logs.go:284] No container was found matching "kube-apiserver"
	I1027 19:52:50.119709  412559 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:52:50.119777  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:52:50.151344  412559 cri.go:89] found id: ""
	I1027 19:52:50.151380  412559 logs.go:282] 0 containers: []
	W1027 19:52:50.151389  412559 logs.go:284] No container was found matching "etcd"
	I1027 19:52:50.151396  412559 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:52:50.151461  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:52:50.188985  412559 cri.go:89] found id: ""
	I1027 19:52:50.189012  412559 logs.go:282] 0 containers: []
	W1027 19:52:50.189021  412559 logs.go:284] No container was found matching "coredns"
	I1027 19:52:50.189027  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:52:50.189094  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:52:50.223192  412559 cri.go:89] found id: ""
	I1027 19:52:50.223219  412559 logs.go:282] 0 containers: []
	W1027 19:52:50.223229  412559 logs.go:284] No container was found matching "kube-scheduler"
	I1027 19:52:50.223235  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:52:50.223297  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:52:50.250033  412559 cri.go:89] found id: ""
	I1027 19:52:50.250058  412559 logs.go:282] 0 containers: []
	W1027 19:52:50.250066  412559 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:52:50.250073  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:52:50.250132  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:52:50.276685  412559 cri.go:89] found id: ""
	I1027 19:52:50.276712  412559 logs.go:282] 0 containers: []
	W1027 19:52:50.276721  412559 logs.go:284] No container was found matching "kube-controller-manager"
	I1027 19:52:50.276728  412559 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:52:50.276808  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:52:50.316355  412559 cri.go:89] found id: ""
	I1027 19:52:50.316379  412559 logs.go:282] 0 containers: []
	W1027 19:52:50.316388  412559 logs.go:284] No container was found matching "kindnet"
	I1027 19:52:50.316397  412559 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:52:50.316478  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:52:50.341598  412559 cri.go:89] found id: ""
	I1027 19:52:50.341623  412559 logs.go:282] 0 containers: []
	W1027 19:52:50.341631  412559 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:52:50.341640  412559 logs.go:123] Gathering logs for container status ...
	I1027 19:52:50.341669  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:52:50.372324  412559 logs.go:123] Gathering logs for kubelet ...
	I1027 19:52:50.372396  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:52:50.505519  412559 logs.go:123] Gathering logs for dmesg ...
	I1027 19:52:50.505632  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:52:50.528408  412559 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:52:50.528484  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:52:50.637992  412559 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:52:50.638052  412559 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:52:50.638088  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:52:52.680117  428024 config.go:182] Loaded profile config "pause-470021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:52:52.680737  428024 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:52:52.703196  428024 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 19:52:52.703321  428024 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:52:52.772379  428024 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-27 19:52:52.762397353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 19:52:52.772478  428024 docker.go:318] overlay module found
	I1027 19:52:52.775385  428024 out.go:179] * Using the docker driver based on existing profile
	I1027 19:52:52.778733  428024 start.go:305] selected driver: docker
	I1027 19:52:52.778749  428024 start.go:925] validating driver "docker" against &{Name:pause-470021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-470021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:52:52.778888  428024 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:52:52.779026  428024 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:52:52.846606  428024 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-27 19:52:52.837569074 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 19:52:52.847023  428024 cni.go:84] Creating CNI manager for ""
	I1027 19:52:52.847094  428024 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:52:52.847192  428024 start.go:349] cluster config:
	{Name:pause-470021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-470021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:52:52.852312  428024 out.go:179] * Starting "pause-470021" primary control-plane node in "pause-470021" cluster
	I1027 19:52:52.855275  428024 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 19:52:52.858135  428024 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 19:52:52.860959  428024 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:52:52.861010  428024 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 19:52:52.861025  428024 cache.go:58] Caching tarball of preloaded images
	I1027 19:52:52.861049  428024 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 19:52:52.861110  428024 preload.go:233] Found /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 19:52:52.861119  428024 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 19:52:52.861260  428024 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/pause-470021/config.json ...
	I1027 19:52:52.883733  428024 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 19:52:52.883756  428024 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 19:52:52.883774  428024 cache.go:232] Successfully downloaded all kic artifacts
	I1027 19:52:52.883795  428024 start.go:360] acquireMachinesLock for pause-470021: {Name:mkafa68747e6c89df1b06354106458771898fc4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:52:52.883859  428024 start.go:364] duration metric: took 42.083µs to acquireMachinesLock for "pause-470021"
	I1027 19:52:52.883884  428024 start.go:96] Skipping create...Using existing machine configuration
	I1027 19:52:52.883889  428024 fix.go:54] fixHost starting: 
	I1027 19:52:52.884157  428024 cli_runner.go:164] Run: docker container inspect pause-470021 --format={{.State.Status}}
	I1027 19:52:52.900377  428024 fix.go:112] recreateIfNeeded on pause-470021: state=Running err=<nil>
	W1027 19:52:52.900416  428024 fix.go:138] unexpected machine state, will restart: <nil>
	I1027 19:52:52.905475  428024 out.go:252] * Updating the running docker "pause-470021" container ...
	I1027 19:52:52.905515  428024 machine.go:93] provisionDockerMachine start ...
	I1027 19:52:52.905614  428024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-470021
	I1027 19:52:52.922622  428024 main.go:141] libmachine: Using SSH client type: native
	I1027 19:52:52.922936  428024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1027 19:52:52.922951  428024 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 19:52:53.074844  428024 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-470021
	
	I1027 19:52:53.074872  428024 ubuntu.go:182] provisioning hostname "pause-470021"
	I1027 19:52:53.074939  428024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-470021
	I1027 19:52:53.095153  428024 main.go:141] libmachine: Using SSH client type: native
	I1027 19:52:53.095459  428024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1027 19:52:53.095470  428024 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-470021 && echo "pause-470021" | sudo tee /etc/hostname
	I1027 19:52:53.254578  428024 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-470021
	
	I1027 19:52:53.254650  428024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-470021
	I1027 19:52:53.280702  428024 main.go:141] libmachine: Using SSH client type: native
	I1027 19:52:53.281027  428024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1027 19:52:53.281048  428024 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-470021' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-470021/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-470021' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 19:52:53.444014  428024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 19:52:53.444091  428024 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-266035/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-266035/.minikube}
	I1027 19:52:53.444125  428024 ubuntu.go:190] setting up certificates
	I1027 19:52:53.444166  428024 provision.go:84] configureAuth start
	I1027 19:52:53.444266  428024 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-470021
	I1027 19:52:53.464856  428024 provision.go:143] copyHostCerts
	I1027 19:52:53.464927  428024 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem, removing ...
	I1027 19:52:53.464943  428024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem
	I1027 19:52:53.465030  428024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem (1078 bytes)
	I1027 19:52:53.465155  428024 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem, removing ...
	I1027 19:52:53.465173  428024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem
	I1027 19:52:53.465209  428024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem (1123 bytes)
	I1027 19:52:53.465300  428024 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem, removing ...
	I1027 19:52:53.465306  428024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem
	I1027 19:52:53.465336  428024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem (1675 bytes)
	I1027 19:52:53.465419  428024 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem org=jenkins.pause-470021 san=[127.0.0.1 192.168.85.2 localhost minikube pause-470021]
	I1027 19:52:54.078148  428024 provision.go:177] copyRemoteCerts
	I1027 19:52:54.078240  428024 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 19:52:54.078319  428024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-470021
	I1027 19:52:54.096999  428024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/pause-470021/id_rsa Username:docker}
	I1027 19:52:54.202876  428024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 19:52:54.221087  428024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1027 19:52:54.239370  428024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 19:52:54.258080  428024 provision.go:87] duration metric: took 813.863997ms to configureAuth
	I1027 19:52:54.258108  428024 ubuntu.go:206] setting minikube options for container-runtime
	I1027 19:52:54.258316  428024 config.go:182] Loaded profile config "pause-470021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:52:54.258428  428024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-470021
	I1027 19:52:54.275632  428024 main.go:141] libmachine: Using SSH client type: native
	I1027 19:52:54.275947  428024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1027 19:52:54.275974  428024 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 19:52:53.181982  412559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:52:53.194659  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:52:53.194729  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:52:53.222447  412559 cri.go:89] found id: ""
	I1027 19:52:53.222472  412559 logs.go:282] 0 containers: []
	W1027 19:52:53.222480  412559 logs.go:284] No container was found matching "kube-apiserver"
	I1027 19:52:53.222486  412559 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:52:53.222544  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:52:53.251578  412559 cri.go:89] found id: ""
	I1027 19:52:53.251599  412559 logs.go:282] 0 containers: []
	W1027 19:52:53.251607  412559 logs.go:284] No container was found matching "etcd"
	I1027 19:52:53.251613  412559 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:52:53.251670  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:52:53.283946  412559 cri.go:89] found id: ""
	I1027 19:52:53.284028  412559 logs.go:282] 0 containers: []
	W1027 19:52:53.284040  412559 logs.go:284] No container was found matching "coredns"
	I1027 19:52:53.284048  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:52:53.284117  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:52:53.326246  412559 cri.go:89] found id: ""
	I1027 19:52:53.326267  412559 logs.go:282] 0 containers: []
	W1027 19:52:53.326279  412559 logs.go:284] No container was found matching "kube-scheduler"
	I1027 19:52:53.326286  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:52:53.326342  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:52:53.365606  412559 cri.go:89] found id: ""
	I1027 19:52:53.365627  412559 logs.go:282] 0 containers: []
	W1027 19:52:53.365649  412559 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:52:53.365656  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:52:53.365735  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:52:53.403294  412559 cri.go:89] found id: ""
	I1027 19:52:53.403317  412559 logs.go:282] 0 containers: []
	W1027 19:52:53.403325  412559 logs.go:284] No container was found matching "kube-controller-manager"
	I1027 19:52:53.403332  412559 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:52:53.403393  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:52:53.439907  412559 cri.go:89] found id: ""
	I1027 19:52:53.439930  412559 logs.go:282] 0 containers: []
	W1027 19:52:53.439938  412559 logs.go:284] No container was found matching "kindnet"
	I1027 19:52:53.439945  412559 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:52:53.440027  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:52:53.492378  412559 cri.go:89] found id: ""
	I1027 19:52:53.492404  412559 logs.go:282] 0 containers: []
	W1027 19:52:53.492412  412559 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:52:53.492422  412559 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:52:53.492433  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:52:53.594178  412559 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:52:53.594204  412559 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:52:53.594216  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:52:53.635870  412559 logs.go:123] Gathering logs for container status ...
	I1027 19:52:53.635908  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:52:53.671773  412559 logs.go:123] Gathering logs for kubelet ...
	I1027 19:52:53.671800  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:52:53.809479  412559 logs.go:123] Gathering logs for dmesg ...
	I1027 19:52:53.809517  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
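Note: the block above is one complete diagnostic sweep: with the API server down (no kube-apiserver process and no containers for any control-plane component), minikube falls back to collecting kubelet, dmesg, CRI-O, and container-status logs. The sweep can be reproduced by hand with the commands quoted verbatim in the log; a minimal sketch:

	# scan for each control-plane component the way logs.go does
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet storage-provisioner; do
	  ids=$(sudo crictl ps -a --quiet --name="$c")
	  echo "$c: ${ids:-<no containers>}"
	done
	# then pull the same journals minikube gathers
	sudo journalctl -u kubelet -n 400 --no-pager
	sudo journalctl -u crio -n 400 --no-pager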
	I1027 19:52:56.334666  412559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:52:56.344499  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:52:56.344566  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:52:56.369209  412559 cri.go:89] found id: ""
	I1027 19:52:56.369234  412559 logs.go:282] 0 containers: []
	W1027 19:52:56.369242  412559 logs.go:284] No container was found matching "kube-apiserver"
	I1027 19:52:56.369248  412559 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:52:56.369306  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:52:56.398131  412559 cri.go:89] found id: ""
	I1027 19:52:56.398152  412559 logs.go:282] 0 containers: []
	W1027 19:52:56.398160  412559 logs.go:284] No container was found matching "etcd"
	I1027 19:52:56.398166  412559 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:52:56.398223  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:52:56.423196  412559 cri.go:89] found id: ""
	I1027 19:52:56.423221  412559 logs.go:282] 0 containers: []
	W1027 19:52:56.423231  412559 logs.go:284] No container was found matching "coredns"
	I1027 19:52:56.423237  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:52:56.423297  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:52:56.448340  412559 cri.go:89] found id: ""
	I1027 19:52:56.448366  412559 logs.go:282] 0 containers: []
	W1027 19:52:56.448375  412559 logs.go:284] No container was found matching "kube-scheduler"
	I1027 19:52:56.448381  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:52:56.448439  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:52:56.472857  412559 cri.go:89] found id: ""
	I1027 19:52:56.472880  412559 logs.go:282] 0 containers: []
	W1027 19:52:56.472888  412559 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:52:56.472894  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:52:56.472952  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:52:56.498191  412559 cri.go:89] found id: ""
	I1027 19:52:56.498213  412559 logs.go:282] 0 containers: []
	W1027 19:52:56.498221  412559 logs.go:284] No container was found matching "kube-controller-manager"
	I1027 19:52:56.498234  412559 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:52:56.498293  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:52:56.524557  412559 cri.go:89] found id: ""
	I1027 19:52:56.524583  412559 logs.go:282] 0 containers: []
	W1027 19:52:56.524592  412559 logs.go:284] No container was found matching "kindnet"
	I1027 19:52:56.524599  412559 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:52:56.524661  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:52:56.553984  412559 cri.go:89] found id: ""
	I1027 19:52:56.554006  412559 logs.go:282] 0 containers: []
	W1027 19:52:56.554014  412559 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:52:56.554022  412559 logs.go:123] Gathering logs for kubelet ...
	I1027 19:52:56.554033  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:52:56.669122  412559 logs.go:123] Gathering logs for dmesg ...
	I1027 19:52:56.669158  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:52:56.688124  412559 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:52:56.688160  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:52:56.760658  412559 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:52:56.760679  412559 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:52:56.760695  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:52:56.796986  412559 logs.go:123] Gathering logs for container status ...
	I1027 19:52:56.797023  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:52:59.671676  428024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 19:52:59.671712  428024 machine.go:96] duration metric: took 6.76617363s to provisionDockerMachine
	I1027 19:52:59.671723  428024 start.go:293] postStartSetup for "pause-470021" (driver="docker")
	I1027 19:52:59.671734  428024 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 19:52:59.671795  428024 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 19:52:59.671836  428024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-470021
	I1027 19:52:59.698156  428024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/pause-470021/id_rsa Username:docker}
	I1027 19:52:59.812110  428024 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 19:52:59.816075  428024 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 19:52:59.816101  428024 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 19:52:59.816111  428024 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/addons for local assets ...
	I1027 19:52:59.816185  428024 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/files for local assets ...
	I1027 19:52:59.816258  428024 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem -> 2678802.pem in /etc/ssl/certs
	I1027 19:52:59.816361  428024 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 19:52:59.824721  428024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 19:52:59.857426  428024 start.go:296] duration metric: took 185.68734ms for postStartSetup
	I1027 19:52:59.857584  428024 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:52:59.857684  428024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-470021
	I1027 19:52:59.889201  428024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/pause-470021/id_rsa Username:docker}
	I1027 19:52:59.996724  428024 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 19:53:00.024059  428024 fix.go:56] duration metric: took 7.14016078s for fixHost
	I1027 19:53:00.024086  428024 start.go:83] releasing machines lock for "pause-470021", held for 7.140214858s
	I1027 19:53:00.024180  428024 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-470021
	I1027 19:53:00.107042  428024 ssh_runner.go:195] Run: cat /version.json
	I1027 19:53:00.109603  428024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-470021
	I1027 19:53:00.110426  428024 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 19:53:00.110505  428024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-470021
	I1027 19:53:00.169659  428024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/pause-470021/id_rsa Username:docker}
	I1027 19:53:00.185168  428024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/pause-470021/id_rsa Username:docker}
	I1027 19:53:00.348161  428024 ssh_runner.go:195] Run: systemctl --version
	I1027 19:53:00.442236  428024 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 19:53:00.490450  428024 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 19:53:00.496613  428024 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 19:53:00.496698  428024 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 19:53:00.507627  428024 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
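Note: the find command above renames any bridge/podman CNI configs to *.mk_disabled so they cannot conflict with the CNI minikube manages; on this run nothing matched ("nothing to disable"). To see what, if anything, was parked, or to restore a file later (the restored file name below is a hypothetical example):

	# list configs minikube has moved out of the way
	sudo find /etc/cni/net.d -maxdepth 1 -name '*.mk_disabled'
	# restore one (hypothetical file name)
	# sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled \
	#         /etc/cni/net.d/87-podman-bridge.conflist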
	I1027 19:53:00.507660  428024 start.go:495] detecting cgroup driver to use...
	I1027 19:53:00.507714  428024 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 19:53:00.507786  428024 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 19:53:00.525906  428024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 19:53:00.541747  428024 docker.go:218] disabling cri-docker service (if available) ...
	I1027 19:53:00.541876  428024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 19:53:00.558929  428024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 19:53:00.573702  428024 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 19:53:00.710600  428024 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 19:53:00.848899  428024 docker.go:234] disabling docker service ...
	I1027 19:53:00.848967  428024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 19:53:00.864233  428024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 19:53:00.877888  428024 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 19:53:01.010739  428024 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 19:53:01.146388  428024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 19:53:01.162092  428024 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 19:53:01.179818  428024 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 19:53:01.179917  428024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:53:01.189841  428024 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 19:53:01.189938  428024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:53:01.201524  428024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:53:01.212200  428024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:53:01.222332  428024 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 19:53:01.232024  428024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:53:01.242614  428024 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:53:01.252265  428024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:53:01.261986  428024 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 19:53:01.270348  428024 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 19:53:01.278425  428024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:53:01.433614  428024 ssh_runner.go:195] Run: sudo systemctl restart crio
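Note: the tee/sed sequence above leaves a small, checkable footprint. After the restart, the effective settings can be read back directly; the expected values (taken from the commands in the log) are shown as comments:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#     "net.ipv4.ip_unprivileged_port_start=0",
	cat /etc/crictl.yaml
	#   runtime-endpoint: unix:///var/run/crio/crio.sock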
	I1027 19:53:01.613800  428024 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 19:53:01.613941  428024 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 19:53:01.617820  428024 start.go:563] Will wait 60s for crictl version
	I1027 19:53:01.617927  428024 ssh_runner.go:195] Run: which crictl
	I1027 19:53:01.621582  428024 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 19:53:01.651131  428024 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 19:53:01.651214  428024 ssh_runner.go:195] Run: crio --version
	I1027 19:53:01.680192  428024 ssh_runner.go:195] Run: crio --version
	I1027 19:53:01.718433  428024 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 19:53:01.721436  428024 cli_runner.go:164] Run: docker network inspect pause-470021 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 19:53:01.738558  428024 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1027 19:53:01.742668  428024 kubeadm.go:883] updating cluster {Name:pause-470021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-470021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 19:53:01.742807  428024 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:53:01.742868  428024 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:53:01.777906  428024 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 19:53:01.777934  428024 crio.go:433] Images already preloaded, skipping extraction
	I1027 19:53:01.777990  428024 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:53:01.804042  428024 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 19:53:01.804067  428024 cache_images.go:85] Images are preloaded, skipping loading
	I1027 19:53:01.804075  428024 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1027 19:53:01.804175  428024 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-470021 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-470021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 19:53:01.804258  428024 ssh_runner.go:195] Run: crio config
	I1027 19:53:01.874113  428024 cni.go:84] Creating CNI manager for ""
	I1027 19:53:01.874137  428024 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:53:01.874162  428024 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 19:53:01.874194  428024 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-470021 NodeName:pause-470021 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 19:53:01.874367  428024 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-470021"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 19:53:01.874452  428024 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 19:53:01.883568  428024 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 19:53:01.883663  428024 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 19:53:01.891702  428024 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1027 19:53:01.910259  428024 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 19:53:01.925951  428024 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
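Note: the kubeadm config printed above is what just landed in /var/tmp/minikube/kubeadm.yaml.new (2209 bytes). Recent kubeadm releases ship a `config validate` subcommand; assuming it is available in the staged v1.34.1 binaries, the generated file can be sanity-checked before it is used:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new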
	I1027 19:53:01.940478  428024 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1027 19:53:01.944549  428024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:53:02.107884  428024 ssh_runner.go:195] Run: sudo systemctl start kubelet
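Note: once the unit files are in place and kubelet has been started, a quick health check mirrors what the log gathering does later; a minimal sketch:

	systemctl is-active kubelet        # expect "active"
	sudo journalctl -u kubelet -n 400 --no-pager | tail -n 20
	systemctl cat kubelet              # shows the 10-kubeadm.conf drop-in written above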
	I1027 19:53:02.125608  428024 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/pause-470021 for IP: 192.168.85.2
	I1027 19:53:02.125679  428024 certs.go:195] generating shared ca certs ...
	I1027 19:53:02.125711  428024 certs.go:227] acquiring lock for ca certs: {Name:mk172548fc54811b51040cc0201d01eb4b3dd19b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:53:02.125889  428024 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key
	I1027 19:53:02.125977  428024 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key
	I1027 19:53:02.126026  428024 certs.go:257] generating profile certs ...
	I1027 19:53:02.126167  428024 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/pause-470021/client.key
	I1027 19:53:02.126359  428024 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/pause-470021/apiserver.key.209bd21f
	I1027 19:53:02.126471  428024 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/pause-470021/proxy-client.key
	I1027 19:53:02.126632  428024 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880.pem (1338 bytes)
	W1027 19:53:02.126706  428024 certs.go:480] ignoring /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880_empty.pem, impossibly tiny 0 bytes
	I1027 19:53:02.126787  428024 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 19:53:02.126843  428024 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem (1078 bytes)
	I1027 19:53:02.126915  428024 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem (1123 bytes)
	I1027 19:53:02.126963  428024 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem (1675 bytes)
	I1027 19:53:02.127075  428024 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 19:53:02.127722  428024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 19:53:02.149244  428024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 19:53:02.170271  428024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 19:53:02.193420  428024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 19:53:02.215160  428024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/pause-470021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 19:53:02.238431  428024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/pause-470021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 19:53:02.259505  428024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/pause-470021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 19:53:02.282921  428024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/pause-470021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 19:53:02.303041  428024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /usr/share/ca-certificates/2678802.pem (1708 bytes)
	I1027 19:53:02.325762  428024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 19:53:02.346255  428024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880.pem --> /usr/share/ca-certificates/267880.pem (1338 bytes)
	I1027 19:53:02.367999  428024 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 19:53:02.381777  428024 ssh_runner.go:195] Run: openssl version
	I1027 19:53:02.389001  428024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/267880.pem && ln -fs /usr/share/ca-certificates/267880.pem /etc/ssl/certs/267880.pem"
	I1027 19:53:02.397908  428024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/267880.pem
	I1027 19:53:02.402026  428024 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:04 /usr/share/ca-certificates/267880.pem
	I1027 19:53:02.402151  428024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/267880.pem
	I1027 19:53:02.447748  428024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/267880.pem /etc/ssl/certs/51391683.0"
	I1027 19:53:02.456228  428024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2678802.pem && ln -fs /usr/share/ca-certificates/2678802.pem /etc/ssl/certs/2678802.pem"
	I1027 19:53:02.465590  428024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2678802.pem
	I1027 19:53:02.471084  428024 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:04 /usr/share/ca-certificates/2678802.pem
	I1027 19:53:02.471202  428024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2678802.pem
	I1027 19:53:02.519062  428024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2678802.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 19:53:02.528161  428024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 19:53:02.537626  428024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:53:02.542127  428024 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:53:02.542247  428024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:53:02.587285  428024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
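Note: the hash-named symlinks above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash convention: `openssl x509 -hash` prints the hash of the certificate's subject name, and OpenSSL resolves CAs in /etc/ssl/certs via `<hash>.0` links. The link minikube creates for minikubeCA.pem can be derived by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	echo "$h"   # b5213941 on this run
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"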
	I1027 19:53:02.602167  428024 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 19:53:02.608362  428024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 19:52:59.325972  412559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:52:59.336207  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:52:59.336299  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:52:59.364925  412559 cri.go:89] found id: ""
	I1027 19:52:59.364950  412559 logs.go:282] 0 containers: []
	W1027 19:52:59.364958  412559 logs.go:284] No container was found matching "kube-apiserver"
	I1027 19:52:59.364993  412559 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:52:59.365056  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:52:59.391243  412559 cri.go:89] found id: ""
	I1027 19:52:59.391272  412559 logs.go:282] 0 containers: []
	W1027 19:52:59.391281  412559 logs.go:284] No container was found matching "etcd"
	I1027 19:52:59.391287  412559 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:52:59.391346  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:52:59.420038  412559 cri.go:89] found id: ""
	I1027 19:52:59.420061  412559 logs.go:282] 0 containers: []
	W1027 19:52:59.420070  412559 logs.go:284] No container was found matching "coredns"
	I1027 19:52:59.420076  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:52:59.420131  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:52:59.444929  412559 cri.go:89] found id: ""
	I1027 19:52:59.444952  412559 logs.go:282] 0 containers: []
	W1027 19:52:59.444961  412559 logs.go:284] No container was found matching "kube-scheduler"
	I1027 19:52:59.444967  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:52:59.445027  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:52:59.488742  412559 cri.go:89] found id: ""
	I1027 19:52:59.488768  412559 logs.go:282] 0 containers: []
	W1027 19:52:59.488777  412559 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:52:59.488784  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:52:59.488841  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:52:59.517263  412559 cri.go:89] found id: ""
	I1027 19:52:59.517288  412559 logs.go:282] 0 containers: []
	W1027 19:52:59.517296  412559 logs.go:284] No container was found matching "kube-controller-manager"
	I1027 19:52:59.517303  412559 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:52:59.517361  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:52:59.552224  412559 cri.go:89] found id: ""
	I1027 19:52:59.552250  412559 logs.go:282] 0 containers: []
	W1027 19:52:59.552258  412559 logs.go:284] No container was found matching "kindnet"
	I1027 19:52:59.552265  412559 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:52:59.552321  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:52:59.583504  412559 cri.go:89] found id: ""
	I1027 19:52:59.583530  412559 logs.go:282] 0 containers: []
	W1027 19:52:59.583539  412559 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:52:59.583548  412559 logs.go:123] Gathering logs for kubelet ...
	I1027 19:52:59.583560  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:52:59.731129  412559 logs.go:123] Gathering logs for dmesg ...
	I1027 19:52:59.731207  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:52:59.749151  412559 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:52:59.749181  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:52:59.831698  412559 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:52:59.831722  412559 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:52:59.831741  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:52:59.877271  412559 logs.go:123] Gathering logs for container status ...
	I1027 19:52:59.877568  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:53:02.424170  412559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:53:02.435696  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:53:02.435775  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:53:02.482557  412559 cri.go:89] found id: ""
	I1027 19:53:02.482593  412559 logs.go:282] 0 containers: []
	W1027 19:53:02.482601  412559 logs.go:284] No container was found matching "kube-apiserver"
	I1027 19:53:02.482608  412559 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:53:02.482680  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:53:02.533493  412559 cri.go:89] found id: ""
	I1027 19:53:02.533517  412559 logs.go:282] 0 containers: []
	W1027 19:53:02.533526  412559 logs.go:284] No container was found matching "etcd"
	I1027 19:53:02.533539  412559 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:53:02.533606  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:53:02.581773  412559 cri.go:89] found id: ""
	I1027 19:53:02.581796  412559 logs.go:282] 0 containers: []
	W1027 19:53:02.581804  412559 logs.go:284] No container was found matching "coredns"
	I1027 19:53:02.581819  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:53:02.581899  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:53:02.633484  412559 cri.go:89] found id: ""
	I1027 19:53:02.633518  412559 logs.go:282] 0 containers: []
	W1027 19:53:02.633526  412559 logs.go:284] No container was found matching "kube-scheduler"
	I1027 19:53:02.633533  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:53:02.633593  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:53:02.685560  412559 cri.go:89] found id: ""
	I1027 19:53:02.685585  412559 logs.go:282] 0 containers: []
	W1027 19:53:02.685593  412559 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:53:02.685600  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:53:02.685669  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:53:02.655560  428024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 19:53:02.699447  428024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 19:53:02.742646  428024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 19:53:02.844330  428024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 19:53:03.029854  428024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
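Note: each openssl invocation above uses `-checkend 86400`, which exits 0 only if the certificate will still be valid 24 hours from now; that is how minikube decides the existing control-plane certs can be reused. The individual checks collapse into a loop:

	for f in apiserver-etcd-client apiserver-kubelet-client etcd/server \
	         etcd/healthcheck-client etcd/peer front-proxy-client; do
	  sudo openssl x509 -noout -checkend 86400 \
	      -in "/var/lib/minikube/certs/$f.crt" && echo "$f: valid >24h"
	done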
	I1027 19:53:03.169930  428024 kubeadm.go:400] StartCluster: {Name:pause-470021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-470021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:53:03.170043  428024 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:53:03.170115  428024 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:53:03.300489  428024 cri.go:89] found id: "bb5755505420e2c40bd2f92567f06c5b2bff9cc40d5286bd60d82a6f776ca771"
	I1027 19:53:03.300513  428024 cri.go:89] found id: "c384025bdb6e4e5fd0f211129c90fa79ae691468550d2be99ba79f5cff89e73e"
	I1027 19:53:03.300518  428024 cri.go:89] found id: "d58e48bacf1495d3c508b1e7222843a56c1e79dfedb59aeb62a1c6aad514d698"
	I1027 19:53:03.300521  428024 cri.go:89] found id: "78decaae3b0e60034f2398cb0b994dce873210be4f725d847ff7529ca517374f"
	I1027 19:53:03.300525  428024 cri.go:89] found id: "669fb029456a2c87088307a9f1f18b0a214174bd7c9bb178f24f3ca5a7fc6e1b"
	I1027 19:53:03.300528  428024 cri.go:89] found id: "336f9e0fad2e20cdf64c964f42b31d59f4633709021335d2e471a64c05518236"
	I1027 19:53:03.300532  428024 cri.go:89] found id: "7eb02b6b3e6acbfaa83db3a48ac65c331d99ca20916430c27ada8e7bfb90bc22"
	I1027 19:53:03.300535  428024 cri.go:89] found id: "69a0468ab45ef90113638021bffb192716f99ab5157428ccbd881c54365dd32d"
	I1027 19:53:03.300538  428024 cri.go:89] found id: "87b11ab122b25bab328b375d9d04001497127ff2585367e02e286374156d6569"
	I1027 19:53:03.300548  428024 cri.go:89] found id: "bdd6aaebbf9b1f7984851798a00219d0ab9df585fc4aa577a1bf0220aa1fd7fc"
	I1027 19:53:03.300552  428024 cri.go:89] found id: "5bab714cb0c4efef9b8229cf91fc7171e0d11a7652571bf103319a343da548c2"
	I1027 19:53:03.300557  428024 cri.go:89] found id: "eb0212fd806c68f3148012bf7e975926ecbbcc8417725249917368a738dcb11d"
	I1027 19:53:03.300563  428024 cri.go:89] found id: "9c65fbe4eff1527a24e959481045341035c8c2ea0c34900ec330446573f13baf"
	I1027 19:53:03.300567  428024 cri.go:89] found id: ""
	I1027 19:53:03.300615  428024 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 19:53:03.328231  428024 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:53:03Z" level=error msg="open /run/runc: no such file or directory"
	I1027 19:53:03.328312  428024 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 19:53:03.342843  428024 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1027 19:53:03.342866  428024 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1027 19:53:03.342921  428024 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 19:53:03.357859  428024 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 19:53:03.358484  428024 kubeconfig.go:125] found "pause-470021" server: "https://192.168.85.2:8443"
	I1027 19:53:03.359316  428024 kapi.go:59] client config for pause-470021: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21801-266035/.minikube/profiles/pause-470021/client.crt", KeyFile:"/home/jenkins/minikube-integration/21801-266035/.minikube/profiles/pause-470021/client.key", CAFile:"/home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1027 19:53:03.359814  428024 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1027 19:53:03.359835  428024 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1027 19:53:03.359840  428024 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1027 19:53:03.359845  428024 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1027 19:53:03.359850  428024 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1027 19:53:03.360152  428024 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 19:53:03.371308  428024 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1027 19:53:03.371342  428024 kubeadm.go:601] duration metric: took 28.470602ms to restartPrimaryControlPlane
	I1027 19:53:03.371352  428024 kubeadm.go:402] duration metric: took 201.43288ms to StartCluster
	I1027 19:53:03.371366  428024 settings.go:142] acquiring lock: {Name:mk49b9e2e9b7901c6b51e89b8b1e8ee7f9e88107 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:53:03.371427  428024 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 19:53:03.372275  428024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/kubeconfig: {Name:mk0a03a594f8d9f9199b1c7cff7c550cec8414c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:53:03.372500  428024 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:53:03.372853  428024 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 19:53:03.373113  428024 config.go:182] Loaded profile config "pause-470021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:53:03.376472  428024 out.go:179] * Enabled addons: 
	I1027 19:53:03.376536  428024 out.go:179] * Verifying Kubernetes components...
	I1027 19:53:03.379423  428024 addons.go:514] duration metric: took 6.55766ms for enable addons: enabled=[]
	I1027 19:53:03.379521  428024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:53:03.625783  428024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:53:03.643053  428024 node_ready.go:35] waiting up to 6m0s for node "pause-470021" to be "Ready" ...
	I1027 19:53:02.749930  412559 cri.go:89] found id: ""
	I1027 19:53:02.749954  412559 logs.go:282] 0 containers: []
	W1027 19:53:02.749962  412559 logs.go:284] No container was found matching "kube-controller-manager"
	I1027 19:53:02.749969  412559 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:53:02.750029  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:53:02.797979  412559 cri.go:89] found id: ""
	I1027 19:53:02.798003  412559 logs.go:282] 0 containers: []
	W1027 19:53:02.798010  412559 logs.go:284] No container was found matching "kindnet"
	I1027 19:53:02.798016  412559 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:53:02.798091  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:53:02.850121  412559 cri.go:89] found id: ""
	I1027 19:53:02.850148  412559 logs.go:282] 0 containers: []
	W1027 19:53:02.850158  412559 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:53:02.850168  412559 logs.go:123] Gathering logs for kubelet ...
	I1027 19:53:02.850179  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:53:03.016596  412559 logs.go:123] Gathering logs for dmesg ...
	I1027 19:53:03.016633  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:53:03.040763  412559 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:53:03.040803  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:53:03.173573  412559 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:53:03.173597  412559 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:53:03.173609  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:53:03.228677  412559 logs.go:123] Gathering logs for container status ...
	I1027 19:53:03.228756  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:53:05.783107  412559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:53:05.796211  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:53:05.796291  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:53:05.845660  412559 cri.go:89] found id: ""
	I1027 19:53:05.845686  412559 logs.go:282] 0 containers: []
	W1027 19:53:05.845694  412559 logs.go:284] No container was found matching "kube-apiserver"
	I1027 19:53:05.845706  412559 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:53:05.845781  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:53:05.883372  412559 cri.go:89] found id: ""
	I1027 19:53:05.883398  412559 logs.go:282] 0 containers: []
	W1027 19:53:05.883407  412559 logs.go:284] No container was found matching "etcd"
	I1027 19:53:05.883413  412559 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:53:05.883474  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:53:05.933220  412559 cri.go:89] found id: ""
	I1027 19:53:05.933257  412559 logs.go:282] 0 containers: []
	W1027 19:53:05.933266  412559 logs.go:284] No container was found matching "coredns"
	I1027 19:53:05.933272  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:53:05.933333  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:53:05.982268  412559 cri.go:89] found id: ""
	I1027 19:53:05.982309  412559 logs.go:282] 0 containers: []
	W1027 19:53:05.982318  412559 logs.go:284] No container was found matching "kube-scheduler"
	I1027 19:53:05.982324  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:53:05.982385  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:53:06.030911  412559 cri.go:89] found id: ""
	I1027 19:53:06.030946  412559 logs.go:282] 0 containers: []
	W1027 19:53:06.030960  412559 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:53:06.030967  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:53:06.031106  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:53:06.084616  412559 cri.go:89] found id: ""
	I1027 19:53:06.084643  412559 logs.go:282] 0 containers: []
	W1027 19:53:06.084652  412559 logs.go:284] No container was found matching "kube-controller-manager"
	I1027 19:53:06.084659  412559 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:53:06.084717  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:53:06.128563  412559 cri.go:89] found id: ""
	I1027 19:53:06.128589  412559 logs.go:282] 0 containers: []
	W1027 19:53:06.128598  412559 logs.go:284] No container was found matching "kindnet"
	I1027 19:53:06.128604  412559 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:53:06.128661  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:53:06.167728  412559 cri.go:89] found id: ""
	I1027 19:53:06.167755  412559 logs.go:282] 0 containers: []
	W1027 19:53:06.167764  412559 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:53:06.167772  412559 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:53:06.167784  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:53:06.283538  412559 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:53:06.283565  412559 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:53:06.283579  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:53:06.336526  412559 logs.go:123] Gathering logs for container status ...
	I1027 19:53:06.336565  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:53:06.395115  412559 logs.go:123] Gathering logs for kubelet ...
	I1027 19:53:06.395145  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:53:06.532717  412559 logs.go:123] Gathering logs for dmesg ...
	I1027 19:53:06.532793  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
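
Each "Gathering logs for ..." step above is just a shell pipeline run on the node; the command strings are visible verbatim in the log. A sketch of that fan-out, again with local exec standing in for ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Ordered list of sources, matching the "Gathering logs for ..." lines.
	type source struct{ name, cmd string }
	sources := []source{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, s := range sources {
		fmt.Printf("Gathering logs for %s ...\n", s.name)
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			// "describe nodes" fails exactly like this while the API server
			// is down: kubectl exits 1 with "connection ... refused".
			fmt.Printf("failed %s: %v\n", s.name, err)
		}
		fmt.Print(string(out))
	}
}
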
	I1027 19:53:08.017130  428024 node_ready.go:49] node "pause-470021" is "Ready"
	I1027 19:53:08.017165  428024 node_ready.go:38] duration metric: took 4.374072687s for node "pause-470021" to be "Ready" ...
	I1027 19:53:08.017179  428024 api_server.go:52] waiting for apiserver process to appear ...
	I1027 19:53:08.017244  428024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:53:08.040528  428024 api_server.go:72] duration metric: took 4.667990726s to wait for apiserver process to appear ...
	I1027 19:53:08.040554  428024 api_server.go:88] waiting for apiserver healthz status ...
	I1027 19:53:08.040574  428024 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 19:53:08.082130  428024 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1027 19:53:08.082166  428024 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1027 19:53:08.540684  428024 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 19:53:08.555974  428024 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 19:53:08.556017  428024 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 19:53:09.041617  428024 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 19:53:09.079148  428024 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 19:53:09.079185  428024 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 19:53:09.540698  428024 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 19:53:09.552175  428024 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1027 19:53:09.553516  428024 api_server.go:141] control plane version: v1.34.1
	I1027 19:53:09.553544  428024 api_server.go:131] duration metric: took 1.512982393s to wait for apiserver health ...
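
The healthz wait above retries on a roughly half-second cadence and treats both 403 (anonymous RBAC roles not bootstrapped yet) and 500 (post-start hooks still failing) as transient until a 200 "ok" arrives. A sketch of that loop; the insecure TLS config and the hard-coded address are stand-ins, since minikube authenticates with the cluster's client certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.85.2:8443/healthz"
	for {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("healthz not reachable yet:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			// 403 and 500 are expected while the apiserver finishes booting.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
}
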
	I1027 19:53:09.553556  428024 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 19:53:09.557305  428024 system_pods.go:59] 7 kube-system pods found
	I1027 19:53:09.557341  428024 system_pods.go:61] "coredns-66bc5c9577-nrzpx" [1ee91970-2f04-4fd7-b25b-8939d1ac7bd0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 19:53:09.557350  428024 system_pods.go:61] "etcd-pause-470021" [36794bba-7cf3-4ff8-85c5-4913406b2e6a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 19:53:09.557358  428024 system_pods.go:61] "kindnet-czq4c" [0b877aea-545c-4196-abcc-1c1856b6e3cb] Running
	I1027 19:53:09.557365  428024 system_pods.go:61] "kube-apiserver-pause-470021" [260f4617-c3c8-4e74-ab78-87bf979ca6b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 19:53:09.557376  428024 system_pods.go:61] "kube-controller-manager-pause-470021" [56fc150e-c825-4df6-b176-f7a05a4b2b2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 19:53:09.557391  428024 system_pods.go:61] "kube-proxy-5tqdh" [096a2e44-c862-4412-a1d6-080237dfc726] Running
	I1027 19:53:09.557400  428024 system_pods.go:61] "kube-scheduler-pause-470021" [11c1368a-960e-41f5-94d2-0b087ec02a83] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 19:53:09.557410  428024 system_pods.go:74] duration metric: took 3.844508ms to wait for pod list to return data ...
	I1027 19:53:09.557419  428024 default_sa.go:34] waiting for default service account to be created ...
	I1027 19:53:09.567046  428024 default_sa.go:45] found service account: "default"
	I1027 19:53:09.567072  428024 default_sa.go:55] duration metric: took 9.641256ms for default service account to be created ...
	I1027 19:53:09.567082  428024 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 19:53:09.570325  428024 system_pods.go:86] 7 kube-system pods found
	I1027 19:53:09.570354  428024 system_pods.go:89] "coredns-66bc5c9577-nrzpx" [1ee91970-2f04-4fd7-b25b-8939d1ac7bd0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 19:53:09.570363  428024 system_pods.go:89] "etcd-pause-470021" [36794bba-7cf3-4ff8-85c5-4913406b2e6a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 19:53:09.570378  428024 system_pods.go:89] "kindnet-czq4c" [0b877aea-545c-4196-abcc-1c1856b6e3cb] Running
	I1027 19:53:09.570388  428024 system_pods.go:89] "kube-apiserver-pause-470021" [260f4617-c3c8-4e74-ab78-87bf979ca6b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 19:53:09.570410  428024 system_pods.go:89] "kube-controller-manager-pause-470021" [56fc150e-c825-4df6-b176-f7a05a4b2b2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 19:53:09.570419  428024 system_pods.go:89] "kube-proxy-5tqdh" [096a2e44-c862-4412-a1d6-080237dfc726] Running
	I1027 19:53:09.570426  428024 system_pods.go:89] "kube-scheduler-pause-470021" [11c1368a-960e-41f5-94d2-0b087ec02a83] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 19:53:09.570432  428024 system_pods.go:126] duration metric: took 3.34474ms to wait for k8s-apps to be running ...
	I1027 19:53:09.570444  428024 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 19:53:09.570508  428024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:53:09.600954  428024 system_svc.go:56] duration metric: took 30.501788ms WaitForService to wait for kubelet
	I1027 19:53:09.600980  428024 kubeadm.go:586] duration metric: took 6.228448948s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 19:53:09.600998  428024 node_conditions.go:102] verifying NodePressure condition ...
	I1027 19:53:09.608346  428024 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 19:53:09.608379  428024 node_conditions.go:123] node cpu capacity is 2
	I1027 19:53:09.608391  428024 node_conditions.go:105] duration metric: took 7.387634ms to run NodePressure ...
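
The NodePressure check above reads node capacity (203034800Ki ephemeral storage, 2 CPUs) and condition status straight from the Node objects. A client-go sketch of the same read; the kubeconfig path is a placeholder, not the value minikube uses:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		for _, c := range n.Status.Conditions {
			pressure := c.Type == corev1.NodeMemoryPressure ||
				c.Type == corev1.NodeDiskPressure ||
				c.Type == corev1.NodePIDPressure
			if pressure && c.Status == corev1.ConditionTrue {
				fmt.Printf("  node is under pressure: %s\n", c.Type)
			}
		}
	}
}
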
	I1027 19:53:09.608404  428024 start.go:241] waiting for startup goroutines ...
	I1027 19:53:09.608421  428024 start.go:246] waiting for cluster config update ...
	I1027 19:53:09.608432  428024 start.go:255] writing updated cluster config ...
	I1027 19:53:09.608766  428024 ssh_runner.go:195] Run: rm -f paused
	I1027 19:53:09.617384  428024 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 19:53:09.618128  428024 kapi.go:59] client config for pause-470021: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21801-266035/.minikube/profiles/pause-470021/client.crt", KeyFile:"/home/jenkins/minikube-integration/21801-266035/.minikube/profiles/pause-470021/client.key", CAFile:"/home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1027 19:53:09.621948  428024 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nrzpx" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 19:53:11.626971  428024 pod_ready.go:104] pod "coredns-66bc5c9577-nrzpx" is not "Ready", error: <nil>
	I1027 19:53:09.056990  412559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:53:09.071512  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:53:09.071577  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:53:09.114515  412559 cri.go:89] found id: ""
	I1027 19:53:09.114536  412559 logs.go:282] 0 containers: []
	W1027 19:53:09.114544  412559 logs.go:284] No container was found matching "kube-apiserver"
	I1027 19:53:09.114550  412559 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:53:09.114615  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:53:09.173966  412559 cri.go:89] found id: ""
	I1027 19:53:09.173987  412559 logs.go:282] 0 containers: []
	W1027 19:53:09.173995  412559 logs.go:284] No container was found matching "etcd"
	I1027 19:53:09.174001  412559 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:53:09.174059  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:53:09.218833  412559 cri.go:89] found id: ""
	I1027 19:53:09.218854  412559 logs.go:282] 0 containers: []
	W1027 19:53:09.218862  412559 logs.go:284] No container was found matching "coredns"
	I1027 19:53:09.218868  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:53:09.218926  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:53:09.261080  412559 cri.go:89] found id: ""
	I1027 19:53:09.261152  412559 logs.go:282] 0 containers: []
	W1027 19:53:09.261176  412559 logs.go:284] No container was found matching "kube-scheduler"
	I1027 19:53:09.261198  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:53:09.261308  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:53:09.307949  412559 cri.go:89] found id: ""
	I1027 19:53:09.308023  412559 logs.go:282] 0 containers: []
	W1027 19:53:09.308046  412559 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:53:09.308077  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:53:09.308192  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:53:09.345274  412559 cri.go:89] found id: ""
	I1027 19:53:09.345346  412559 logs.go:282] 0 containers: []
	W1027 19:53:09.345370  412559 logs.go:284] No container was found matching "kube-controller-manager"
	I1027 19:53:09.345393  412559 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:53:09.345499  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:53:09.395232  412559 cri.go:89] found id: ""
	I1027 19:53:09.395304  412559 logs.go:282] 0 containers: []
	W1027 19:53:09.395328  412559 logs.go:284] No container was found matching "kindnet"
	I1027 19:53:09.395350  412559 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:53:09.395462  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:53:09.436724  412559 cri.go:89] found id: ""
	I1027 19:53:09.436795  412559 logs.go:282] 0 containers: []
	W1027 19:53:09.436819  412559 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:53:09.436844  412559 logs.go:123] Gathering logs for kubelet ...
	I1027 19:53:09.436891  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:53:09.597755  412559 logs.go:123] Gathering logs for dmesg ...
	I1027 19:53:09.600334  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:53:09.622513  412559 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:53:09.622582  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:53:09.707543  412559 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:53:09.707564  412559 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:53:09.707579  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:53:09.755694  412559 logs.go:123] Gathering logs for container status ...
	I1027 19:53:09.755767  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:53:12.304460  412559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:53:12.315342  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:53:12.315415  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:53:12.341661  412559 cri.go:89] found id: ""
	I1027 19:53:12.341688  412559 logs.go:282] 0 containers: []
	W1027 19:53:12.341696  412559 logs.go:284] No container was found matching "kube-apiserver"
	I1027 19:53:12.341703  412559 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:53:12.341760  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:53:12.368056  412559 cri.go:89] found id: ""
	I1027 19:53:12.368083  412559 logs.go:282] 0 containers: []
	W1027 19:53:12.368092  412559 logs.go:284] No container was found matching "etcd"
	I1027 19:53:12.368098  412559 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:53:12.368159  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:53:12.392153  412559 cri.go:89] found id: ""
	I1027 19:53:12.392180  412559 logs.go:282] 0 containers: []
	W1027 19:53:12.392190  412559 logs.go:284] No container was found matching "coredns"
	I1027 19:53:12.392197  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:53:12.392253  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:53:12.417131  412559 cri.go:89] found id: ""
	I1027 19:53:12.417156  412559 logs.go:282] 0 containers: []
	W1027 19:53:12.417165  412559 logs.go:284] No container was found matching "kube-scheduler"
	I1027 19:53:12.417172  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:53:12.417227  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:53:12.445499  412559 cri.go:89] found id: ""
	I1027 19:53:12.445524  412559 logs.go:282] 0 containers: []
	W1027 19:53:12.445533  412559 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:53:12.445540  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:53:12.445596  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:53:12.469986  412559 cri.go:89] found id: ""
	I1027 19:53:12.470010  412559 logs.go:282] 0 containers: []
	W1027 19:53:12.470018  412559 logs.go:284] No container was found matching "kube-controller-manager"
	I1027 19:53:12.470024  412559 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:53:12.470081  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:53:12.494372  412559 cri.go:89] found id: ""
	I1027 19:53:12.494396  412559 logs.go:282] 0 containers: []
	W1027 19:53:12.494409  412559 logs.go:284] No container was found matching "kindnet"
	I1027 19:53:12.494415  412559 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:53:12.494471  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:53:12.519982  412559 cri.go:89] found id: ""
	I1027 19:53:12.520006  412559 logs.go:282] 0 containers: []
	W1027 19:53:12.520015  412559 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:53:12.520024  412559 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:53:12.520042  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:53:12.585692  412559 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:53:12.585710  412559 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:53:12.585722  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:53:12.628515  412559 logs.go:123] Gathering logs for container status ...
	I1027 19:53:12.628554  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:53:12.660366  412559 logs.go:123] Gathering logs for kubelet ...
	I1027 19:53:12.660394  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1027 19:53:13.627543  428024 pod_ready.go:104] pod "coredns-66bc5c9577-nrzpx" is not "Ready", error: <nil>
	I1027 19:53:15.127810  428024 pod_ready.go:94] pod "coredns-66bc5c9577-nrzpx" is "Ready"
	I1027 19:53:15.127848  428024 pod_ready.go:86] duration metric: took 5.505872912s for pod "coredns-66bc5c9577-nrzpx" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:53:15.130952  428024 pod_ready.go:83] waiting for pod "etcd-pause-470021" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:53:15.135922  428024 pod_ready.go:94] pod "etcd-pause-470021" is "Ready"
	I1027 19:53:15.135955  428024 pod_ready.go:86] duration metric: took 4.973731ms for pod "etcd-pause-470021" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:53:15.138581  428024 pod_ready.go:83] waiting for pod "kube-apiserver-pause-470021" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 19:53:17.143890  428024 pod_ready.go:104] pod "kube-apiserver-pause-470021" is not "Ready", error: <nil>
	I1027 19:53:12.780155  412559 logs.go:123] Gathering logs for dmesg ...
	I1027 19:53:12.780193  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:53:15.300491  412559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:53:15.310894  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:53:15.311019  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:53:15.336158  412559 cri.go:89] found id: ""
	I1027 19:53:15.336183  412559 logs.go:282] 0 containers: []
	W1027 19:53:15.336192  412559 logs.go:284] No container was found matching "kube-apiserver"
	I1027 19:53:15.336199  412559 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:53:15.336281  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:53:15.361745  412559 cri.go:89] found id: ""
	I1027 19:53:15.361769  412559 logs.go:282] 0 containers: []
	W1027 19:53:15.361777  412559 logs.go:284] No container was found matching "etcd"
	I1027 19:53:15.361783  412559 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:53:15.361841  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:53:15.387823  412559 cri.go:89] found id: ""
	I1027 19:53:15.387847  412559 logs.go:282] 0 containers: []
	W1027 19:53:15.387856  412559 logs.go:284] No container was found matching "coredns"
	I1027 19:53:15.387862  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:53:15.387921  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:53:15.413810  412559 cri.go:89] found id: ""
	I1027 19:53:15.413833  412559 logs.go:282] 0 containers: []
	W1027 19:53:15.413841  412559 logs.go:284] No container was found matching "kube-scheduler"
	I1027 19:53:15.413847  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:53:15.413913  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:53:15.441075  412559 cri.go:89] found id: ""
	I1027 19:53:15.441100  412559 logs.go:282] 0 containers: []
	W1027 19:53:15.441108  412559 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:53:15.441115  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:53:15.441179  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:53:15.466448  412559 cri.go:89] found id: ""
	I1027 19:53:15.466473  412559 logs.go:282] 0 containers: []
	W1027 19:53:15.466481  412559 logs.go:284] No container was found matching "kube-controller-manager"
	I1027 19:53:15.466488  412559 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:53:15.466555  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:53:15.495221  412559 cri.go:89] found id: ""
	I1027 19:53:15.495244  412559 logs.go:282] 0 containers: []
	W1027 19:53:15.495252  412559 logs.go:284] No container was found matching "kindnet"
	I1027 19:53:15.495261  412559 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:53:15.495321  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:53:15.527994  412559 cri.go:89] found id: ""
	I1027 19:53:15.528071  412559 logs.go:282] 0 containers: []
	W1027 19:53:15.528094  412559 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:53:15.528123  412559 logs.go:123] Gathering logs for kubelet ...
	I1027 19:53:15.528167  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:53:15.651122  412559 logs.go:123] Gathering logs for dmesg ...
	I1027 19:53:15.651156  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:53:15.671520  412559 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:53:15.671621  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:53:15.748692  412559 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:53:15.748712  412559 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:53:15.748726  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:53:15.787674  412559 logs.go:123] Gathering logs for container status ...
	I1027 19:53:15.787710  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1027 19:53:19.144700  428024 pod_ready.go:104] pod "kube-apiserver-pause-470021" is not "Ready", error: <nil>
	I1027 19:53:20.144218  428024 pod_ready.go:94] pod "kube-apiserver-pause-470021" is "Ready"
	I1027 19:53:20.144248  428024 pod_ready.go:86] duration metric: took 5.005642408s for pod "kube-apiserver-pause-470021" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:53:20.146480  428024 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-470021" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:53:20.151174  428024 pod_ready.go:94] pod "kube-controller-manager-pause-470021" is "Ready"
	I1027 19:53:20.151204  428024 pod_ready.go:86] duration metric: took 4.695749ms for pod "kube-controller-manager-pause-470021" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:53:20.153618  428024 pod_ready.go:83] waiting for pod "kube-proxy-5tqdh" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:53:20.158523  428024 pod_ready.go:94] pod "kube-proxy-5tqdh" is "Ready"
	I1027 19:53:20.158555  428024 pod_ready.go:86] duration metric: took 4.913687ms for pod "kube-proxy-5tqdh" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:53:20.161023  428024 pod_ready.go:83] waiting for pod "kube-scheduler-pause-470021" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:53:21.325038  428024 pod_ready.go:94] pod "kube-scheduler-pause-470021" is "Ready"
	I1027 19:53:21.325063  428024 pod_ready.go:86] duration metric: took 1.164012141s for pod "kube-scheduler-pause-470021" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:53:21.325075  428024 pod_ready.go:40] duration metric: took 11.707658495s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 19:53:21.400536  428024 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 19:53:21.403512  428024 out.go:179] * Done! kubectl is now configured to use "pause-470021" cluster and "default" namespace by default
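
The pause-470021 run above finishes once each control-plane pod reports the PodReady condition, polling on a ~2s cadence (note the repeated "is not \"Ready\"" warnings before each success). A client-go sketch of that per-pod wait; the kubeconfig path and the hard-coded pod names are taken from this log for illustration only:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods := []string{"coredns-66bc5c9577-nrzpx", "etcd-pause-470021", "kube-apiserver-pause-470021"}
	for _, name := range pods {
		for {
			p, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				fmt.Printf("pod %q: %v\n", name, err)
			} else if podReady(p) {
				fmt.Printf("pod %q is Ready\n", name)
				break
			}
			time.Sleep(2 * time.Second)
		}
	}
}
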
	I1027 19:53:18.319783  412559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:53:18.329980  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:53:18.330048  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:53:18.359414  412559 cri.go:89] found id: ""
	I1027 19:53:18.359437  412559 logs.go:282] 0 containers: []
	W1027 19:53:18.359445  412559 logs.go:284] No container was found matching "kube-apiserver"
	I1027 19:53:18.359451  412559 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:53:18.359508  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:53:18.386400  412559 cri.go:89] found id: ""
	I1027 19:53:18.386425  412559 logs.go:282] 0 containers: []
	W1027 19:53:18.386439  412559 logs.go:284] No container was found matching "etcd"
	I1027 19:53:18.386446  412559 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:53:18.386507  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:53:18.416367  412559 cri.go:89] found id: ""
	I1027 19:53:18.416391  412559 logs.go:282] 0 containers: []
	W1027 19:53:18.416399  412559 logs.go:284] No container was found matching "coredns"
	I1027 19:53:18.416405  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:53:18.416464  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:53:18.441956  412559 cri.go:89] found id: ""
	I1027 19:53:18.441983  412559 logs.go:282] 0 containers: []
	W1027 19:53:18.441991  412559 logs.go:284] No container was found matching "kube-scheduler"
	I1027 19:53:18.441998  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:53:18.442060  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:53:18.469294  412559 cri.go:89] found id: ""
	I1027 19:53:18.469319  412559 logs.go:282] 0 containers: []
	W1027 19:53:18.469328  412559 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:53:18.469334  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:53:18.469395  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:53:18.495088  412559 cri.go:89] found id: ""
	I1027 19:53:18.495121  412559 logs.go:282] 0 containers: []
	W1027 19:53:18.495135  412559 logs.go:284] No container was found matching "kube-controller-manager"
	I1027 19:53:18.495144  412559 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:53:18.495208  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:53:18.522050  412559 cri.go:89] found id: ""
	I1027 19:53:18.522117  412559 logs.go:282] 0 containers: []
	W1027 19:53:18.522139  412559 logs.go:284] No container was found matching "kindnet"
	I1027 19:53:18.522163  412559 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:53:18.522256  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:53:18.547795  412559 cri.go:89] found id: ""
	I1027 19:53:18.547874  412559 logs.go:282] 0 containers: []
	W1027 19:53:18.547896  412559 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:53:18.547915  412559 logs.go:123] Gathering logs for kubelet ...
	I1027 19:53:18.547956  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:53:18.675628  412559 logs.go:123] Gathering logs for dmesg ...
	I1027 19:53:18.675664  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:53:18.693513  412559 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:53:18.693541  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:53:18.768051  412559 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:53:18.768072  412559 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:53:18.768083  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:53:18.804192  412559 logs.go:123] Gathering logs for container status ...
	I1027 19:53:18.804222  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:53:21.334620  412559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:53:21.346881  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:53:21.346949  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:53:21.382217  412559 cri.go:89] found id: ""
	I1027 19:53:21.382240  412559 logs.go:282] 0 containers: []
	W1027 19:53:21.382248  412559 logs.go:284] No container was found matching "kube-apiserver"
	I1027 19:53:21.382254  412559 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:53:21.382315  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:53:21.437854  412559 cri.go:89] found id: ""
	I1027 19:53:21.437877  412559 logs.go:282] 0 containers: []
	W1027 19:53:21.437895  412559 logs.go:284] No container was found matching "etcd"
	I1027 19:53:21.437902  412559 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:53:21.437974  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:53:21.469478  412559 cri.go:89] found id: ""
	I1027 19:53:21.469499  412559 logs.go:282] 0 containers: []
	W1027 19:53:21.469507  412559 logs.go:284] No container was found matching "coredns"
	I1027 19:53:21.469512  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:53:21.469571  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:53:21.504924  412559 cri.go:89] found id: ""
	I1027 19:53:21.504953  412559 logs.go:282] 0 containers: []
	W1027 19:53:21.504961  412559 logs.go:284] No container was found matching "kube-scheduler"
	I1027 19:53:21.504968  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:53:21.505027  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:53:21.536348  412559 cri.go:89] found id: ""
	I1027 19:53:21.536374  412559 logs.go:282] 0 containers: []
	W1027 19:53:21.536383  412559 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:53:21.536389  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:53:21.536452  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:53:21.575518  412559 cri.go:89] found id: ""
	I1027 19:53:21.575541  412559 logs.go:282] 0 containers: []
	W1027 19:53:21.575549  412559 logs.go:284] No container was found matching "kube-controller-manager"
	I1027 19:53:21.575556  412559 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:53:21.575616  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:53:21.606628  412559 cri.go:89] found id: ""
	I1027 19:53:21.606650  412559 logs.go:282] 0 containers: []
	W1027 19:53:21.606657  412559 logs.go:284] No container was found matching "kindnet"
	I1027 19:53:21.606672  412559 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:53:21.606737  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:53:21.640818  412559 cri.go:89] found id: ""
	I1027 19:53:21.640841  412559 logs.go:282] 0 containers: []
	W1027 19:53:21.640849  412559 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:53:21.640857  412559 logs.go:123] Gathering logs for kubelet ...
	I1027 19:53:21.640869  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:53:21.791954  412559 logs.go:123] Gathering logs for dmesg ...
	I1027 19:53:21.792034  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:53:21.811805  412559 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:53:21.811837  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:53:21.927994  412559 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:53:21.928024  412559 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:53:21.928036  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:53:21.967840  412559 logs.go:123] Gathering logs for container status ...
	I1027 19:53:21.967927  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	
	
	==> CRI-O <==
	Oct 27 19:53:03 pause-470021 crio[2071]: time="2025-10-27T19:53:03.244788743Z" level=info msg="Created container bb5755505420e2c40bd2f92567f06c5b2bff9cc40d5286bd60d82a6f776ca771: kube-system/etcd-pause-470021/etcd" id=dc4fde66-2a28-4e58-897b-1eb113d1f5eb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:53:03 pause-470021 crio[2071]: time="2025-10-27T19:53:03.249271928Z" level=info msg="Starting container: bb5755505420e2c40bd2f92567f06c5b2bff9cc40d5286bd60d82a6f776ca771" id=da7477d8-7781-4a43-af2e-5b33df231c81 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:53:03 pause-470021 crio[2071]: time="2025-10-27T19:53:03.249550451Z" level=info msg="Created container d58e48bacf1495d3c508b1e7222843a56c1e79dfedb59aeb62a1c6aad514d698: kube-system/kube-apiserver-pause-470021/kube-apiserver" id=64a90fc8-06b4-4848-80cd-0958f1d51440 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:53:03 pause-470021 crio[2071]: time="2025-10-27T19:53:03.25048009Z" level=info msg="Starting container: d58e48bacf1495d3c508b1e7222843a56c1e79dfedb59aeb62a1c6aad514d698" id=51e6cf69-7bdf-41d9-bd5f-25ddbebcb552 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:53:03 pause-470021 crio[2071]: time="2025-10-27T19:53:03.263342741Z" level=info msg="Started container" PID=2348 containerID=c384025bdb6e4e5fd0f211129c90fa79ae691468550d2be99ba79f5cff89e73e description=kube-system/kube-scheduler-pause-470021/kube-scheduler id=68a0fe43-e2ca-4158-851d-feff634241b6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=06c24f2f2365184cb8ba1348accc487257936b05bfd587e7a626a113fe97fc5d
	Oct 27 19:53:03 pause-470021 crio[2071]: time="2025-10-27T19:53:03.288370209Z" level=info msg="Started container" PID=2349 containerID=d58e48bacf1495d3c508b1e7222843a56c1e79dfedb59aeb62a1c6aad514d698 description=kube-system/kube-apiserver-pause-470021/kube-apiserver id=51e6cf69-7bdf-41d9-bd5f-25ddbebcb552 name=/runtime.v1.RuntimeService/StartContainer sandboxID=74428ecb1bf9a714ad7fe9173376f1ae6ab16f2310c94b853ccc94a092873ea3
	Oct 27 19:53:03 pause-470021 crio[2071]: time="2025-10-27T19:53:03.290261707Z" level=info msg="Started container" PID=2360 containerID=bb5755505420e2c40bd2f92567f06c5b2bff9cc40d5286bd60d82a6f776ca771 description=kube-system/etcd-pause-470021/etcd id=da7477d8-7781-4a43-af2e-5b33df231c81 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ab006d8a15ff1649b010b65645a16a00bd9c9ac8cf1250a0e5c7d897eb29673e
	Oct 27 19:53:03 pause-470021 crio[2071]: time="2025-10-27T19:53:03.312318767Z" level=info msg="Created container edb8ab36eea95aa2a08a1d4c6d4048ccc588d84a29202156e960705fc0fa969e: kube-system/kube-controller-manager-pause-470021/kube-controller-manager" id=ad8950be-85ad-43e3-a2f9-2c0664f29e95 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:53:03 pause-470021 crio[2071]: time="2025-10-27T19:53:03.313000421Z" level=info msg="Starting container: edb8ab36eea95aa2a08a1d4c6d4048ccc588d84a29202156e960705fc0fa969e" id=11c48ea4-3aa9-4a9b-aecd-d1173a592dcb name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:53:03 pause-470021 crio[2071]: time="2025-10-27T19:53:03.318382272Z" level=info msg="Started container" PID=2384 containerID=edb8ab36eea95aa2a08a1d4c6d4048ccc588d84a29202156e960705fc0fa969e description=kube-system/kube-controller-manager-pause-470021/kube-controller-manager id=11c48ea4-3aa9-4a9b-aecd-d1173a592dcb name=/runtime.v1.RuntimeService/StartContainer sandboxID=febdfbb1bd6f5d56d2fa07bdb9a0d8515537f990d385d50291de9bd6a8c816d2
	Oct 27 19:53:13 pause-470021 crio[2071]: time="2025-10-27T19:53:13.360668718Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 19:53:13 pause-470021 crio[2071]: time="2025-10-27T19:53:13.365034311Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 19:53:13 pause-470021 crio[2071]: time="2025-10-27T19:53:13.365067336Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 19:53:13 pause-470021 crio[2071]: time="2025-10-27T19:53:13.365092073Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 19:53:13 pause-470021 crio[2071]: time="2025-10-27T19:53:13.368166077Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 19:53:13 pause-470021 crio[2071]: time="2025-10-27T19:53:13.368197083Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 19:53:13 pause-470021 crio[2071]: time="2025-10-27T19:53:13.368218161Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 19:53:13 pause-470021 crio[2071]: time="2025-10-27T19:53:13.371869535Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 19:53:13 pause-470021 crio[2071]: time="2025-10-27T19:53:13.37190841Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 19:53:13 pause-470021 crio[2071]: time="2025-10-27T19:53:13.371967649Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 19:53:13 pause-470021 crio[2071]: time="2025-10-27T19:53:13.374737842Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 19:53:13 pause-470021 crio[2071]: time="2025-10-27T19:53:13.37476751Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 19:53:13 pause-470021 crio[2071]: time="2025-10-27T19:53:13.374797926Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 19:53:13 pause-470021 crio[2071]: time="2025-10-27T19:53:13.377698749Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 19:53:13 pause-470021 crio[2071]: time="2025-10-27T19:53:13.377728508Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	edb8ab36eea95       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   21 seconds ago       Running             kube-controller-manager   1                   febdfbb1bd6f5       kube-controller-manager-pause-470021   kube-system
	bb5755505420e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   21 seconds ago       Running             etcd                      1                   ab006d8a15ff1       etcd-pause-470021                      kube-system
	c384025bdb6e4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   21 seconds ago       Running             kube-scheduler            1                   06c24f2f23651       kube-scheduler-pause-470021            kube-system
	d58e48bacf149       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   21 seconds ago       Running             kube-apiserver            1                   74428ecb1bf9a       kube-apiserver-pause-470021            kube-system
	78decaae3b0e6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   21 seconds ago       Running             coredns                   1                   6b1e2709e2cfc       coredns-66bc5c9577-nrzpx               kube-system
	669fb029456a2       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   21 seconds ago       Running             kindnet-cni               1                   4c27dd3878bbc       kindnet-czq4c                          kube-system
	336f9e0fad2e2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   21 seconds ago       Running             kube-proxy                1                   371646ff17027       kube-proxy-5tqdh                       kube-system
	7eb02b6b3e6ac       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   33 seconds ago       Exited              coredns                   0                   6b1e2709e2cfc       coredns-66bc5c9577-nrzpx               kube-system
	69a0468ab45ef       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   371646ff17027       kube-proxy-5tqdh                       kube-system
	87b11ab122b25       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   4c27dd3878bbc       kindnet-czq4c                          kube-system
	bdd6aaebbf9b1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   74428ecb1bf9a       kube-apiserver-pause-470021            kube-system
	5bab714cb0c4e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   ab006d8a15ff1       etcd-pause-470021                      kube-system
	eb0212fd806c6       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   febdfbb1bd6f5       kube-controller-manager-pause-470021   kube-system
	9c65fbe4eff15       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   06c24f2f23651       kube-scheduler-pause-470021            kube-system
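
	The table above is crictl's view of the node: each Exited/Running pair shares a POD ID, i.e. the restart at ~19:53 replaced the containers but reused the original sandboxes (ATTEMPT 0 vs 1). A hedged way to reproduce it and drill into a specific container (crictl normally accepts an unambiguous ID prefix):

	    minikube -p pause-470021 ssh -- sudo crictl ps -a
	    minikube -p pause-470021 ssh -- sudo crictl logs edb8ab36eea95      # restarted kube-controller-manager
	    minikube -p pause-470021 ssh -- sudo crictl inspect bb5755505420e | head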
	
	
	==> coredns [78decaae3b0e60034f2398cb0b994dce873210be4f725d847ff7529ca517374f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41497 - 61636 "HINFO IN 303672438323418135.7660371047021629632. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.027076714s
	
	
	==> coredns [7eb02b6b3e6acbfaa83db3a48ac65c331d99ca20916430c27ada8e7bfb90bc22] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47037 - 55541 "HINFO IN 7061574596687865819.4425809311748167007. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014568381s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
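
	Reading the two coredns blocks together: the first (78deca…, ATTEMPT 1) started while the apiserver was still down, so its EndpointSlice/Service/Namespace lists were refused until it gave up waiting and started with an unsynced API; the second (7eb02b…, ATTEMPT 0) is the pre-restart instance shutting down cleanly on SIGTERM. Both logs stay reachable through the pod, since kubelet keeps the previous container around:

	    kubectl -n kube-system logs coredns-66bc5c9577-nrzpx              # current (ATTEMPT 1)
	    kubectl -n kube-system logs coredns-66bc5c9577-nrzpx --previous   # exited (ATTEMPT 0)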
	
	
	==> describe nodes <==
	Name:               pause-470021
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-470021
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=pause-470021
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T19_52_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 19:51:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-470021
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:53:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 19:52:50 +0000   Mon, 27 Oct 2025 19:51:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 19:52:50 +0000   Mon, 27 Oct 2025 19:51:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 19:52:50 +0000   Mon, 27 Oct 2025 19:51:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 19:52:50 +0000   Mon, 27 Oct 2025 19:52:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-470021
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                b793bb73-fce9-4fb4-b85b-85a8efd2e97d
	  Boot ID:                    23ea05b4-8203-4fb9-a84a-5deb4b091cbb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-nrzpx                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     76s
	  kube-system                 etcd-pause-470021                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         83s
	  kube-system                 kindnet-czq4c                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      76s
	  kube-system                 kube-apiserver-pause-470021             250m (12%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-controller-manager-pause-470021    200m (10%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-proxy-5tqdh                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-pause-470021             100m (5%)     0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 73s                kube-proxy       
	  Normal   Starting                 14s                kube-proxy       
	  Normal   NodeHasSufficientPID     90s (x8 over 90s)  kubelet          Node pause-470021 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 90s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  90s (x8 over 90s)  kubelet          Node pause-470021 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    90s (x8 over 90s)  kubelet          Node pause-470021 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 90s                kubelet          Starting kubelet.
	  Normal   Starting                 82s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 82s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  82s                kubelet          Node pause-470021 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    82s                kubelet          Node pause-470021 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     82s                kubelet          Node pause-470021 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           77s                node-controller  Node pause-470021 event: Registered Node pause-470021 in Controller
	  Normal   NodeReady                34s                kubelet          Node pause-470021 status is now: NodeReady
	  Normal   RegisteredNode           13s                node-controller  Node pause-470021 event: Registered Node pause-470021 in Controller
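
	This section is kubectl describe node output; the doubled Starting/RegisteredNode events record the kubelet restart at ~19:52 and the controller-manager re-registering the node at ~19:53. Equivalent queries against the same cluster, assuming the kubeconfig for this profile is active:

	    kubectl describe node pause-470021
	    kubectl get node pause-470021 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    kubectl get events --field-selector involvedObject.name=pause-470021 --sort-by=.lastTimestamp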
	
	
	==> dmesg <==
	[Oct27 19:24] overlayfs: idmapped layers are currently not supported
	[Oct27 19:25] overlayfs: idmapped layers are currently not supported
	[Oct27 19:26] overlayfs: idmapped layers are currently not supported
	[  +3.069263] overlayfs: idmapped layers are currently not supported
	[Oct27 19:27] overlayfs: idmapped layers are currently not supported
	[ +40.518952] overlayfs: idmapped layers are currently not supported
	[Oct27 19:29] overlayfs: idmapped layers are currently not supported
	[Oct27 19:34] overlayfs: idmapped layers are currently not supported
	[ +33.986700] overlayfs: idmapped layers are currently not supported
	[Oct27 19:36] overlayfs: idmapped layers are currently not supported
	[Oct27 19:37] overlayfs: idmapped layers are currently not supported
	[Oct27 19:38] overlayfs: idmapped layers are currently not supported
	[Oct27 19:39] overlayfs: idmapped layers are currently not supported
	[Oct27 19:40] overlayfs: idmapped layers are currently not supported
	[  +7.015891] overlayfs: idmapped layers are currently not supported
	[Oct27 19:41] overlayfs: idmapped layers are currently not supported
	[ +28.455216] overlayfs: idmapped layers are currently not supported
	[Oct27 19:42] overlayfs: idmapped layers are currently not supported
	[ +26.490174] overlayfs: idmapped layers are currently not supported
	[Oct27 19:44] overlayfs: idmapped layers are currently not supported
	[Oct27 19:45] overlayfs: idmapped layers are currently not supported
	[Oct27 19:47] overlayfs: idmapped layers are currently not supported
	[Oct27 19:49] overlayfs: idmapped layers are currently not supported
	[ +31.410335] overlayfs: idmapped layers are currently not supported
	[Oct27 19:51] overlayfs: idmapped layers are currently not supported
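
	The only kernel messages here are the repeated "overlayfs: idmapped layers are currently not supported" warnings; on this 5.15 AWS kernel they appear to be emitted whenever a runtime attempts an overlay mount with ID-mapped layers, and they are benign for these tests. To read the ring buffer with that noise filtered:

	    minikube -p pause-470021 ssh -- sudo dmesg --ctime | grep -v 'idmapped layers'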
	
	
	==> etcd [5bab714cb0c4efef9b8229cf91fc7171e0d11a7652571bf103319a343da548c2] <==
	{"level":"warn","ts":"2025-10-27T19:51:57.628680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:51:57.639477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:51:57.662837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:51:57.703405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:51:57.763412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:51:57.765246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:51:57.869701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51930","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T19:52:54.455237Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-27T19:52:54.455290Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-470021","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-10-27T19:52:54.455386Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T19:52:54.604310Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T19:52:54.604398Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T19:52:54.604421Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-10-27T19:52:54.604457Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-27T19:52:54.604542Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T19:52:54.604576Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T19:52:54.604584Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T19:52:54.604566Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-27T19:52:54.604621Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T19:52:54.604673Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T19:52:54.604683Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T19:52:54.608027Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-10-27T19:52:54.608113Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T19:52:54.608155Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-27T19:52:54.608182Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-470021","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [bb5755505420e2c40bd2f92567f06c5b2bff9cc40d5286bd60d82a6f776ca771] <==
	{"level":"warn","ts":"2025-10-27T19:53:06.165091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.200705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.228159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.254922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.281014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.325373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.377690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.428382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.534101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.560045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.580129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.604425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.634104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.645837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.660031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.679882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.698812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.716153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.750907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.760289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.777018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.804255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.830711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.841966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.944796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60838","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:53:24 up  2:35,  0 user,  load average: 2.34, 2.65, 2.27
	Linux pause-470021 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
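
	This short section is just uptime, uname -a, and the OS release string gathered from the node, e.g.:

	    minikube -p pause-470021 ssh -- 'uptime; uname -a; grep PRETTY_NAME /etc/os-release'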
	
	
	==> kindnet [669fb029456a2c87088307a9f1f18b0a214174bd7c9bb178f24f3ca5a7fc6e1b] <==
	I1027 19:53:03.025503       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 19:53:03.025741       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1027 19:53:03.025865       1 main.go:148] setting mtu 1500 for CNI 
	I1027 19:53:03.025876       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 19:53:03.025887       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T19:53:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 19:53:03.371322       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 19:53:03.371418       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 19:53:03.371430       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 19:53:03.372410       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 19:53:03.372754       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1027 19:53:03.372850       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1027 19:53:03.372965       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1027 19:53:03.373076       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1027 19:53:08.175517       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 19:53:08.175559       1 metrics.go:72] Registering metrics
	I1027 19:53:08.175610       1 controller.go:711] "Syncing nftables rules"
	I1027 19:53:13.360127       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:53:13.360307       1 main.go:301] handling current node
	I1027 19:53:23.360196       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:53:23.360242       1 main.go:301] handling current node
	
	
	==> kindnet [87b11ab122b25bab328b375d9d04001497127ff2585367e02e286374156d6569] <==
	I1027 19:52:09.638204       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 19:52:09.715328       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1027 19:52:09.715473       1 main.go:148] setting mtu 1500 for CNI 
	I1027 19:52:09.715492       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 19:52:09.715507       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T19:52:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 19:52:09.916266       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 19:52:09.916424       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 19:52:09.916436       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 19:52:09.917527       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 19:52:39.916263       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1027 19:52:39.917444       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1027 19:52:39.917542       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1027 19:52:39.917638       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1027 19:52:41.517001       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 19:52:41.517038       1 metrics.go:72] Registering metrics
	I1027 19:52:41.517105       1 controller.go:711] "Syncing nftables rules"
	I1027 19:52:49.920496       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:52:49.920551       1 main.go:301] handling current node
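
	Both kindnet instances report "nri plugin exited: failed to connect to NRI service" because /var/run/nri/nri.sock does not exist in this crio setup; kindnet treats NRI as optional and continues. The restarted instance (669fb…) gets "connection refused" until the apiserver returns at 19:53:08, while the original (87b11…) shows the earlier i/o timeouts before its caches synced. To confirm the missing socket and watch the daemon recover, something like:

	    minikube -p pause-470021 ssh -- 'ls -l /var/run/nri/nri.sock || true'
	    kubectl -n kube-system logs kindnet-czq4c -f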
	
	
	==> kube-apiserver [bdd6aaebbf9b1f7984851798a00219d0ab9df585fc4aa577a1bf0220aa1fd7fc] <==
	W1027 19:52:54.468805       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.468871       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.468910       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.468947       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.468982       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.469018       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.469057       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.469095       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.469135       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.469172       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.469276       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.470314       1 logging.go:55] [core] [Channel #25 SubChannel #27]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.470365       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.470405       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.470444       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.470482       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.470523       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.470686       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.470736       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.470779       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.471514       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.471584       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.471627       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.476222       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.476343       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d58e48bacf1495d3c508b1e7222843a56c1e79dfedb59aeb62a1c6aad514d698] <==
	I1027 19:53:08.089513       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1027 19:53:08.089631       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1027 19:53:08.097106       1 aggregator.go:171] initial CRD sync complete...
	I1027 19:53:08.097201       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 19:53:08.097249       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 19:53:08.107314       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 19:53:08.122549       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1027 19:53:08.122641       1 policy_source.go:240] refreshing policies
	I1027 19:53:08.153248       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 19:53:08.153554       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1027 19:53:08.157264       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 19:53:08.157381       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1027 19:53:08.157399       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1027 19:53:08.159364       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1027 19:53:08.160999       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 19:53:08.169413       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:53:08.177132       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1027 19:53:08.186929       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1027 19:53:08.197716       1 cache.go:39] Caches are synced for autoregister controller
	I1027 19:53:08.763719       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 19:53:10.142027       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 19:53:11.517509       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 19:53:11.756754       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 19:53:11.806645       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 19:53:11.858682       1 controller.go:667] quota admission added evaluator for: deployments.apps
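
	The exited apiserver (bdd6a…) lost etcd mid-shutdown, hence the wall of grpc dials to 127.0.0.1:2379 being refused; the replacement (d58e4…) syncs its informer caches, re-registers quota evaluators, and is serving within seconds. Its health endpoints can be probed directly through the API:

	    kubectl get --raw='/readyz?verbose' | tail
	    kubectl get --raw='/livez/etcd'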
	
	
	==> kube-controller-manager [eb0212fd806c68f3148012bf7e975926ecbbcc8417725249917368a738dcb11d] <==
	I1027 19:52:07.171760       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 19:52:07.214327       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1027 19:52:07.216766       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1027 19:52:07.216943       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1027 19:52:07.219248       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 19:52:07.219367       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 19:52:07.219408       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 19:52:07.219508       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 19:52:07.219535       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 19:52:07.219591       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 19:52:07.219640       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 19:52:07.219515       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 19:52:07.219526       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 19:52:07.226318       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1027 19:52:07.226426       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1027 19:52:07.226473       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1027 19:52:07.226503       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1027 19:52:07.226530       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 19:52:07.227076       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 19:52:07.231038       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 19:52:07.231167       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 19:52:07.231258       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-470021"
	I1027 19:52:07.231338       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1027 19:52:07.247155       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-470021" podCIDRs=["10.244.0.0/24"]
	I1027 19:52:52.237622       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [edb8ab36eea95aa2a08a1d4c6d4048ccc588d84a29202156e960705fc0fa969e] <==
	I1027 19:53:11.502438       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 19:53:11.502524       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1027 19:53:11.502577       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1027 19:53:11.502640       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1027 19:53:11.503958       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 19:53:11.504733       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 19:53:11.505123       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-470021"
	I1027 19:53:11.505178       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1027 19:53:11.505807       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 19:53:11.507230       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 19:53:11.510233       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 19:53:11.511855       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 19:53:11.514976       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 19:53:11.517383       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 19:53:11.518688       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1027 19:53:11.520918       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 19:53:11.520931       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 19:53:11.523984       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1027 19:53:11.525226       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1027 19:53:11.526403       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 19:53:11.542736       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:53:11.549072       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:53:11.549168       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 19:53:11.549202       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 19:53:11.550306       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
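
	Both controller-manager runs follow the same startup shape: informer caches sync, the node-lifecycle controller seeds its zone state, and normal operation resumes (the first run briefly entered and then exited master disruption mode while the node was not-Ready). A quick check against the restarted instance, assuming the component's default secure port 10257:

	    minikube -p pause-470021 ssh -- curl -sk https://127.0.0.1:10257/healthz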
	
	
	==> kube-proxy [336f9e0fad2e20cdf64c964f42b31d59f4633709021335d2e471a64c05518236] <==
	I1027 19:53:03.003442       1 server_linux.go:53] "Using iptables proxy"
	I1027 19:53:03.237935       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1027 19:53:03.239112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-470021&limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1027 19:53:08.238216       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 19:53:08.238362       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1027 19:53:08.238432       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 19:53:09.035066       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 19:53:09.035146       1 server_linux.go:132] "Using iptables Proxier"
	I1027 19:53:09.735224       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 19:53:09.735556       1 server.go:527] "Version info" version="v1.34.1"
	I1027 19:53:09.735579       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:53:09.739004       1 config.go:200] "Starting service config controller"
	I1027 19:53:09.739032       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 19:53:09.739057       1 config.go:106] "Starting endpoint slice config controller"
	I1027 19:53:09.739071       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 19:53:09.739196       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 19:53:09.739208       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 19:53:09.742085       1 config.go:309] "Starting node config controller"
	I1027 19:53:09.742100       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 19:53:09.742106       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 19:53:09.842413       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 19:53:09.842450       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 19:53:09.842489       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [69a0468ab45ef90113638021bffb192716f99ab5157428ccbd881c54365dd32d] <==
	I1027 19:52:10.241839       1 server_linux.go:53] "Using iptables proxy"
	I1027 19:52:10.317024       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 19:52:10.417817       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 19:52:10.417851       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1027 19:52:10.417958       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 19:52:10.437023       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 19:52:10.437078       1 server_linux.go:132] "Using iptables Proxier"
	I1027 19:52:10.440949       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 19:52:10.441242       1 server.go:527] "Version info" version="v1.34.1"
	I1027 19:52:10.441311       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:52:10.445503       1 config.go:200] "Starting service config controller"
	I1027 19:52:10.445532       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 19:52:10.446185       1 config.go:106] "Starting endpoint slice config controller"
	I1027 19:52:10.446205       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 19:52:10.447606       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 19:52:10.447632       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 19:52:10.448092       1 config.go:309] "Starting node config controller"
	I1027 19:52:10.448137       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 19:52:10.448168       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 19:52:10.546111       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 19:52:10.548269       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 19:52:10.548275       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
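
	Both kube-proxy runs select the iptables proxier; the "nodePortAddresses is unset" line is an informational warning for this test profile, not a failure. After "Caches are synced", the service chains should be programmed, which can be spot-checked from the node:

	    minikube -p pause-470021 ssh -- sudo iptables -t nat -S KUBE-SERVICES | head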
	
	
	==> kube-scheduler [9c65fbe4eff1527a24e959481045341035c8c2ea0c34900ec330446573f13baf] <==
	E1027 19:52:00.801448       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 19:52:00.801942       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1027 19:52:00.804299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 19:52:00.804478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 19:52:00.806452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 19:52:00.806723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 19:52:00.809187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 19:52:00.809760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 19:52:00.809785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 19:52:00.809843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 19:52:00.809934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 19:52:00.809947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 19:52:00.810026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 19:52:00.810092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 19:52:00.810137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 19:52:00.810267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 19:52:00.810326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 19:52:00.810820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1027 19:52:01.987808       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:52:54.461964       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1027 19:52:54.462013       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1027 19:52:54.462032       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1027 19:52:54.462054       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:52:54.462261       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1027 19:52:54.462285       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c384025bdb6e4e5fd0f211129c90fa79ae691468550d2be99ba79f5cff89e73e] <==
	I1027 19:53:07.432436       1 serving.go:386] Generated self-signed cert in-memory
	I1027 19:53:09.437324       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 19:53:09.437352       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:53:09.446016       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 19:53:09.449212       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1027 19:53:09.449255       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1027 19:53:09.449288       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 19:53:09.463850       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:53:09.463959       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:53:09.464006       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 19:53:09.464280       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 19:53:09.550446       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1027 19:53:09.565413       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 19:53:09.565537       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 19:53:02 pause-470021 kubelet[1314]: E1027 19:53:02.971224    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-470021\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9db0def9482bee08a3927a69a0e172a2" pod="kube-system/kube-apiserver-pause-470021"
	Oct 27 19:53:02 pause-470021 kubelet[1314]: E1027 19:53:02.982552    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-470021\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="b34aaf5d95741c4a53031f9f12fa5cc2" pod="kube-system/kube-scheduler-pause-470021"
	Oct 27 19:53:02 pause-470021 kubelet[1314]: E1027 19:53:02.983089    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-470021\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9db0def9482bee08a3927a69a0e172a2" pod="kube-system/kube-apiserver-pause-470021"
	Oct 27 19:53:02 pause-470021 kubelet[1314]: E1027 19:53:02.983558    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-470021\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="02116935838487690dbac84a98c92f2e" pod="kube-system/etcd-pause-470021"
	Oct 27 19:53:02 pause-470021 kubelet[1314]: E1027 19:53:02.984095    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-470021\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="c51896afdd83921e4a292cc17c927160" pod="kube-system/kube-controller-manager-pause-470021"
	Oct 27 19:53:02 pause-470021 kubelet[1314]: E1027 19:53:02.984414    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-czq4c\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0b877aea-545c-4196-abcc-1c1856b6e3cb" pod="kube-system/kindnet-czq4c"
	Oct 27 19:53:02 pause-470021 kubelet[1314]: E1027 19:53:02.984773    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5tqdh\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="096a2e44-c862-4412-a1d6-080237dfc726" pod="kube-system/kube-proxy-5tqdh"
	Oct 27 19:53:02 pause-470021 kubelet[1314]: E1027 19:53:02.985086    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-nrzpx\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="1ee91970-2f04-4fd7-b25b-8939d1ac7bd0" pod="kube-system/coredns-66bc5c9577-nrzpx"
	Oct 27 19:53:07 pause-470021 kubelet[1314]: E1027 19:53:07.854523    1314 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-470021\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-470021' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 27 19:53:07 pause-470021 kubelet[1314]: E1027 19:53:07.855311    1314 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-470021\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-470021' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 27 19:53:07 pause-470021 kubelet[1314]: E1027 19:53:07.855584    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-470021\" is forbidden: User \"system:node:pause-470021\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-470021' and this object" podUID="9db0def9482bee08a3927a69a0e172a2" pod="kube-system/kube-apiserver-pause-470021"
	Oct 27 19:53:07 pause-470021 kubelet[1314]: E1027 19:53:07.894002    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-470021\" is forbidden: User \"system:node:pause-470021\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-470021' and this object" podUID="02116935838487690dbac84a98c92f2e" pod="kube-system/etcd-pause-470021"
	Oct 27 19:53:07 pause-470021 kubelet[1314]: E1027 19:53:07.964399    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-470021\" is forbidden: User \"system:node:pause-470021\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-470021' and this object" podUID="c51896afdd83921e4a292cc17c927160" pod="kube-system/kube-controller-manager-pause-470021"
	Oct 27 19:53:08 pause-470021 kubelet[1314]: E1027 19:53:08.002702    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-czq4c\" is forbidden: User \"system:node:pause-470021\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-470021' and this object" podUID="0b877aea-545c-4196-abcc-1c1856b6e3cb" pod="kube-system/kindnet-czq4c"
	Oct 27 19:53:08 pause-470021 kubelet[1314]: E1027 19:53:08.012844    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-5tqdh\" is forbidden: User \"system:node:pause-470021\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-470021' and this object" podUID="096a2e44-c862-4412-a1d6-080237dfc726" pod="kube-system/kube-proxy-5tqdh"
	Oct 27 19:53:08 pause-470021 kubelet[1314]: E1027 19:53:08.025220    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-nrzpx\" is forbidden: User \"system:node:pause-470021\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-470021' and this object" podUID="1ee91970-2f04-4fd7-b25b-8939d1ac7bd0" pod="kube-system/coredns-66bc5c9577-nrzpx"
	Oct 27 19:53:08 pause-470021 kubelet[1314]: E1027 19:53:08.049908    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-470021\" is forbidden: User \"system:node:pause-470021\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-470021' and this object" podUID="b34aaf5d95741c4a53031f9f12fa5cc2" pod="kube-system/kube-scheduler-pause-470021"
	Oct 27 19:53:08 pause-470021 kubelet[1314]: E1027 19:53:08.104116    1314 status_manager.go:1018] "Failed to get status for pod" err=<
	Oct 27 19:53:08 pause-470021 kubelet[1314]:         pods "kindnet-czq4c" is forbidden: User "system:node:pause-470021" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-470021' and this object
	Oct 27 19:53:08 pause-470021 kubelet[1314]:         RBAC: [role.rbac.authorization.k8s.io "kubeadm:kubelet-config" not found, role.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" not found]
	Oct 27 19:53:08 pause-470021 kubelet[1314]:  > podUID="0b877aea-545c-4196-abcc-1c1856b6e3cb" pod="kube-system/kindnet-czq4c"
	Oct 27 19:53:12 pause-470021 kubelet[1314]: W1027 19:53:12.852489    1314 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 27 19:53:21 pause-470021 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 19:53:22 pause-470021 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 19:53:22 pause-470021 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-470021 -n pause-470021
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-470021 -n pause-470021: exit status 2 (356.542593ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-470021 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
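For reference, the non-Running-pod query run at helpers_test.go:269 above can be reproduced outside the harness. A minimal Go sketch of the same kubectl invocation, assuming only that kubectl is on PATH and the pause-470021 context from this run still exists (both are assumptions, not part of the test code):

	// nonrunning.go: hypothetical standalone reproduction of the harness's
	// "any pods not in phase Running?" post-mortem query.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl",
			"--context", "pause-470021", // context name taken from this report
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running",
		).CombinedOutput()
		if err != nil {
			fmt.Printf("kubectl failed: %v\n%s", err, out)
			return
		}
		// Empty output means every pod reported phase Running.
		fmt.Printf("non-Running pods: %q\n", string(out))
	}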
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-470021
helpers_test.go:243: (dbg) docker inspect pause-470021:

-- stdout --
	[
	    {
	        "Id": "41e2ae07e79ce08dd76e9888603203ecd72a731510cd788d005065241ef8eb29",
	        "Created": "2025-10-27T19:51:33.257572081Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 423689,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T19:51:33.296886813Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/41e2ae07e79ce08dd76e9888603203ecd72a731510cd788d005065241ef8eb29/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41e2ae07e79ce08dd76e9888603203ecd72a731510cd788d005065241ef8eb29/hostname",
	        "HostsPath": "/var/lib/docker/containers/41e2ae07e79ce08dd76e9888603203ecd72a731510cd788d005065241ef8eb29/hosts",
	        "LogPath": "/var/lib/docker/containers/41e2ae07e79ce08dd76e9888603203ecd72a731510cd788d005065241ef8eb29/41e2ae07e79ce08dd76e9888603203ecd72a731510cd788d005065241ef8eb29-json.log",
	        "Name": "/pause-470021",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-470021:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-470021",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41e2ae07e79ce08dd76e9888603203ecd72a731510cd788d005065241ef8eb29",
	                "LowerDir": "/var/lib/docker/overlay2/0b21857dff85b810b4acf41af2d3a8653b571714cae7063f820663947d90dc11-init/diff:/var/lib/docker/overlay2/0567218bbe05e8b59b2aef7ad82032d6716040c1f9d9d73e7de8682101079f76/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0b21857dff85b810b4acf41af2d3a8653b571714cae7063f820663947d90dc11/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0b21857dff85b810b4acf41af2d3a8653b571714cae7063f820663947d90dc11/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0b21857dff85b810b4acf41af2d3a8653b571714cae7063f820663947d90dc11/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-470021",
	                "Source": "/var/lib/docker/volumes/pause-470021/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-470021",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-470021",
	                "name.minikube.sigs.k8s.io": "pause-470021",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "97b09cda74410479662655b29934acf254405d431f0b7a9555a703e4958cb74b",
	            "SandboxKey": "/var/run/docker/netns/97b09cda7441",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33383"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33384"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33387"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33385"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33386"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-470021": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:03:10:b4:60:91",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "14b16c2a2409a701f6d5ee6b9bdae0a104e3770e998b35acac7e64929d3d8416",
	                    "EndpointID": "6292a631a9b4a57367c7530fcf80ded420bcbc2ed0b724f8eb1eba2fe9e60023",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-470021",
	                        "41e2ae07e79c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
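The inspect output above also carries the port map minikube depends on: later in this log the harness resolves the forwarded SSH port with docker container inspect -f and a Go template (the "22/tcp" to 33383 binding listed under NetworkSettings.Ports). A minimal standalone Go sketch of that lookup; the container name comes from this report, everything else is illustrative:

	// sshport.go: hypothetical reproduction of the template-based port lookup
	// seen later in this log (docker container inspect -f ... pause-470021).
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Index into .NetworkSettings.Ports["22/tcp"][0].HostPort, the same
		// Go template the log shows being passed to docker.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "pause-470021").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		fmt.Println("host port for 22/tcp:", strings.TrimSpace(string(out))) // 33383 in this run
	}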
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-470021 -n pause-470021
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-470021 -n pause-470021: exit status 2 (339.407704ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-470021 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-470021 logs -n 25: (1.350834855s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-358331 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-358331       │ jenkins │ v1.37.0 │ 27 Oct 25 19:47 UTC │ 27 Oct 25 19:48 UTC │
	│ start   │ -p missing-upgrade-033557 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-033557    │ jenkins │ v1.32.0 │ 27 Oct 25 19:47 UTC │ 27 Oct 25 19:48 UTC │
	│ start   │ -p NoKubernetes-358331 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-358331       │ jenkins │ v1.37.0 │ 27 Oct 25 19:48 UTC │ 27 Oct 25 19:48 UTC │
	│ delete  │ -p NoKubernetes-358331                                                                                                                   │ NoKubernetes-358331       │ jenkins │ v1.37.0 │ 27 Oct 25 19:48 UTC │ 27 Oct 25 19:48 UTC │
	│ start   │ -p NoKubernetes-358331 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-358331       │ jenkins │ v1.37.0 │ 27 Oct 25 19:48 UTC │ 27 Oct 25 19:48 UTC │
	│ start   │ -p missing-upgrade-033557 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-033557    │ jenkins │ v1.37.0 │ 27 Oct 25 19:48 UTC │ 27 Oct 25 19:49 UTC │
	│ ssh     │ -p NoKubernetes-358331 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-358331       │ jenkins │ v1.37.0 │ 27 Oct 25 19:48 UTC │                     │
	│ stop    │ -p NoKubernetes-358331                                                                                                                   │ NoKubernetes-358331       │ jenkins │ v1.37.0 │ 27 Oct 25 19:48 UTC │ 27 Oct 25 19:48 UTC │
	│ start   │ -p NoKubernetes-358331 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-358331       │ jenkins │ v1.37.0 │ 27 Oct 25 19:48 UTC │ 27 Oct 25 19:48 UTC │
	│ ssh     │ -p NoKubernetes-358331 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-358331       │ jenkins │ v1.37.0 │ 27 Oct 25 19:48 UTC │                     │
	│ delete  │ -p NoKubernetes-358331                                                                                                                   │ NoKubernetes-358331       │ jenkins │ v1.37.0 │ 27 Oct 25 19:48 UTC │ 27 Oct 25 19:48 UTC │
	│ start   │ -p kubernetes-upgrade-524430 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-524430 │ jenkins │ v1.37.0 │ 27 Oct 25 19:48 UTC │ 27 Oct 25 19:49 UTC │
	│ delete  │ -p missing-upgrade-033557                                                                                                                │ missing-upgrade-033557    │ jenkins │ v1.37.0 │ 27 Oct 25 19:49 UTC │ 27 Oct 25 19:49 UTC │
	│ stop    │ -p kubernetes-upgrade-524430                                                                                                             │ kubernetes-upgrade-524430 │ jenkins │ v1.37.0 │ 27 Oct 25 19:49 UTC │ 27 Oct 25 19:49 UTC │
	│ start   │ -p stopped-upgrade-296733 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-296733    │ jenkins │ v1.32.0 │ 27 Oct 25 19:49 UTC │ 27 Oct 25 19:50 UTC │
	│ start   │ -p kubernetes-upgrade-524430 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-524430 │ jenkins │ v1.37.0 │ 27 Oct 25 19:49 UTC │                     │
	│ stop    │ stopped-upgrade-296733 stop                                                                                                              │ stopped-upgrade-296733    │ jenkins │ v1.32.0 │ 27 Oct 25 19:50 UTC │ 27 Oct 25 19:50 UTC │
	│ start   │ -p stopped-upgrade-296733 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-296733    │ jenkins │ v1.37.0 │ 27 Oct 25 19:50 UTC │ 27 Oct 25 19:50 UTC │
	│ delete  │ -p stopped-upgrade-296733                                                                                                                │ stopped-upgrade-296733    │ jenkins │ v1.37.0 │ 27 Oct 25 19:50 UTC │ 27 Oct 25 19:50 UTC │
	│ start   │ -p running-upgrade-048851 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-048851    │ jenkins │ v1.32.0 │ 27 Oct 25 19:50 UTC │ 27 Oct 25 19:51 UTC │
	│ start   │ -p running-upgrade-048851 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-048851    │ jenkins │ v1.37.0 │ 27 Oct 25 19:51 UTC │ 27 Oct 25 19:51 UTC │
	│ delete  │ -p running-upgrade-048851                                                                                                                │ running-upgrade-048851    │ jenkins │ v1.37.0 │ 27 Oct 25 19:51 UTC │ 27 Oct 25 19:51 UTC │
	│ start   │ -p pause-470021 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-470021              │ jenkins │ v1.37.0 │ 27 Oct 25 19:51 UTC │ 27 Oct 25 19:52 UTC │
	│ start   │ -p pause-470021 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-470021              │ jenkins │ v1.37.0 │ 27 Oct 25 19:52 UTC │ 27 Oct 25 19:53 UTC │
	│ pause   │ -p pause-470021 --alsologtostderr -v=5                                                                                                   │ pause-470021              │ jenkins │ v1.37.0 │ 27 Oct 25 19:53 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 19:52:52
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 19:52:52.649414  428024 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:52:52.649533  428024 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:52:52.649545  428024 out.go:374] Setting ErrFile to fd 2...
	I1027 19:52:52.649549  428024 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:52:52.649811  428024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 19:52:52.650165  428024 out.go:368] Setting JSON to false
	I1027 19:52:52.651168  428024 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9325,"bootTime":1761585448,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1027 19:52:52.651239  428024 start.go:141] virtualization:  
	I1027 19:52:52.654567  428024 out.go:179] * [pause-470021] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 19:52:52.658459  428024 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:52:52.658529  428024 notify.go:220] Checking for updates...
	I1027 19:52:52.664856  428024 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:52:52.667880  428024 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 19:52:52.670760  428024 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube
	I1027 19:52:52.673712  428024 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 19:52:52.676665  428024 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:52:50.079112  412559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:52:50.090711  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:52:50.090796  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:52:50.119667  412559 cri.go:89] found id: ""
	I1027 19:52:50.119693  412559 logs.go:282] 0 containers: []
	W1027 19:52:50.119702  412559 logs.go:284] No container was found matching "kube-apiserver"
	I1027 19:52:50.119709  412559 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:52:50.119777  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:52:50.151344  412559 cri.go:89] found id: ""
	I1027 19:52:50.151380  412559 logs.go:282] 0 containers: []
	W1027 19:52:50.151389  412559 logs.go:284] No container was found matching "etcd"
	I1027 19:52:50.151396  412559 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:52:50.151461  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:52:50.188985  412559 cri.go:89] found id: ""
	I1027 19:52:50.189012  412559 logs.go:282] 0 containers: []
	W1027 19:52:50.189021  412559 logs.go:284] No container was found matching "coredns"
	I1027 19:52:50.189027  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:52:50.189094  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:52:50.223192  412559 cri.go:89] found id: ""
	I1027 19:52:50.223219  412559 logs.go:282] 0 containers: []
	W1027 19:52:50.223229  412559 logs.go:284] No container was found matching "kube-scheduler"
	I1027 19:52:50.223235  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:52:50.223297  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:52:50.250033  412559 cri.go:89] found id: ""
	I1027 19:52:50.250058  412559 logs.go:282] 0 containers: []
	W1027 19:52:50.250066  412559 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:52:50.250073  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:52:50.250132  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:52:50.276685  412559 cri.go:89] found id: ""
	I1027 19:52:50.276712  412559 logs.go:282] 0 containers: []
	W1027 19:52:50.276721  412559 logs.go:284] No container was found matching "kube-controller-manager"
	I1027 19:52:50.276728  412559 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:52:50.276808  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:52:50.316355  412559 cri.go:89] found id: ""
	I1027 19:52:50.316379  412559 logs.go:282] 0 containers: []
	W1027 19:52:50.316388  412559 logs.go:284] No container was found matching "kindnet"
	I1027 19:52:50.316397  412559 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:52:50.316478  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:52:50.341598  412559 cri.go:89] found id: ""
	I1027 19:52:50.341623  412559 logs.go:282] 0 containers: []
	W1027 19:52:50.341631  412559 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:52:50.341640  412559 logs.go:123] Gathering logs for container status ...
	I1027 19:52:50.341669  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:52:50.372324  412559 logs.go:123] Gathering logs for kubelet ...
	I1027 19:52:50.372396  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:52:50.505519  412559 logs.go:123] Gathering logs for dmesg ...
	I1027 19:52:50.505632  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:52:50.528408  412559 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:52:50.528484  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:52:50.637992  412559 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:52:50.638052  412559 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:52:50.638088  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:52:52.680117  428024 config.go:182] Loaded profile config "pause-470021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:52:52.680737  428024 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:52:52.703196  428024 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 19:52:52.703321  428024 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:52:52.772379  428024 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-27 19:52:52.762397353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 19:52:52.772478  428024 docker.go:318] overlay module found
	I1027 19:52:52.775385  428024 out.go:179] * Using the docker driver based on existing profile
	I1027 19:52:52.778733  428024 start.go:305] selected driver: docker
	I1027 19:52:52.778749  428024 start.go:925] validating driver "docker" against &{Name:pause-470021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-470021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:52:52.778888  428024 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:52:52.779026  428024 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:52:52.846606  428024 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-27 19:52:52.837569074 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 19:52:52.847023  428024 cni.go:84] Creating CNI manager for ""
	I1027 19:52:52.847094  428024 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:52:52.847192  428024 start.go:349] cluster config:
	{Name:pause-470021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-470021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:52:52.852312  428024 out.go:179] * Starting "pause-470021" primary control-plane node in "pause-470021" cluster
	I1027 19:52:52.855275  428024 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 19:52:52.858135  428024 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 19:52:52.860959  428024 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:52:52.861010  428024 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 19:52:52.861025  428024 cache.go:58] Caching tarball of preloaded images
	I1027 19:52:52.861049  428024 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 19:52:52.861110  428024 preload.go:233] Found /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 19:52:52.861119  428024 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 19:52:52.861260  428024 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/pause-470021/config.json ...
	I1027 19:52:52.883733  428024 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 19:52:52.883756  428024 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 19:52:52.883774  428024 cache.go:232] Successfully downloaded all kic artifacts
	I1027 19:52:52.883795  428024 start.go:360] acquireMachinesLock for pause-470021: {Name:mkafa68747e6c89df1b06354106458771898fc4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:52:52.883859  428024 start.go:364] duration metric: took 42.083µs to acquireMachinesLock for "pause-470021"
	I1027 19:52:52.883884  428024 start.go:96] Skipping create...Using existing machine configuration
	I1027 19:52:52.883889  428024 fix.go:54] fixHost starting: 
	I1027 19:52:52.884157  428024 cli_runner.go:164] Run: docker container inspect pause-470021 --format={{.State.Status}}
	I1027 19:52:52.900377  428024 fix.go:112] recreateIfNeeded on pause-470021: state=Running err=<nil>
	W1027 19:52:52.900416  428024 fix.go:138] unexpected machine state, will restart: <nil>
	I1027 19:52:52.905475  428024 out.go:252] * Updating the running docker "pause-470021" container ...
	I1027 19:52:52.905515  428024 machine.go:93] provisionDockerMachine start ...
	I1027 19:52:52.905614  428024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-470021
	I1027 19:52:52.922622  428024 main.go:141] libmachine: Using SSH client type: native
	I1027 19:52:52.922936  428024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1027 19:52:52.922951  428024 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 19:52:53.074844  428024 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-470021
	
	I1027 19:52:53.074872  428024 ubuntu.go:182] provisioning hostname "pause-470021"
	I1027 19:52:53.074939  428024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-470021
	I1027 19:52:53.095153  428024 main.go:141] libmachine: Using SSH client type: native
	I1027 19:52:53.095459  428024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1027 19:52:53.095470  428024 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-470021 && echo "pause-470021" | sudo tee /etc/hostname
	I1027 19:52:53.254578  428024 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-470021
	
	I1027 19:52:53.254650  428024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-470021
	I1027 19:52:53.280702  428024 main.go:141] libmachine: Using SSH client type: native
	I1027 19:52:53.281027  428024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1027 19:52:53.281048  428024 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-470021' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-470021/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-470021' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 19:52:53.444014  428024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
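
The /etc/hosts script above is idempotent: it only rewrites the 127.0.1.1 entry when the hostname is not already present, so re-running provisioning is safe. A minimal way to verify the result by hand over SSH (a sketch, not part of the recorded run; pause-470021 is the profile name from this log):

    # getent consults /etc/hosts via NSS before DNS, so this proves the entry took.
    getent hosts pause-470021 || echo "hostname not resolvable locally"
    # Confirm the kernel hostname matches what provisioning set.
    hostname
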
	I1027 19:52:53.444091  428024 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-266035/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-266035/.minikube}
	I1027 19:52:53.444125  428024 ubuntu.go:190] setting up certificates
	I1027 19:52:53.444166  428024 provision.go:84] configureAuth start
	I1027 19:52:53.444266  428024 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-470021
	I1027 19:52:53.464856  428024 provision.go:143] copyHostCerts
	I1027 19:52:53.464927  428024 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem, removing ...
	I1027 19:52:53.464943  428024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem
	I1027 19:52:53.465030  428024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem (1078 bytes)
	I1027 19:52:53.465155  428024 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem, removing ...
	I1027 19:52:53.465173  428024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem
	I1027 19:52:53.465209  428024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem (1123 bytes)
	I1027 19:52:53.465300  428024 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem, removing ...
	I1027 19:52:53.465306  428024 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem
	I1027 19:52:53.465336  428024 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem (1675 bytes)
	I1027 19:52:53.465419  428024 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem org=jenkins.pause-470021 san=[127.0.0.1 192.168.85.2 localhost minikube pause-470021]
	I1027 19:52:54.078148  428024 provision.go:177] copyRemoteCerts
	I1027 19:52:54.078240  428024 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 19:52:54.078319  428024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-470021
	I1027 19:52:54.096999  428024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/pause-470021/id_rsa Username:docker}
	I1027 19:52:54.202876  428024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 19:52:54.221087  428024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1027 19:52:54.239370  428024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 19:52:54.258080  428024 provision.go:87] duration metric: took 813.863997ms to configureAuth
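
configureAuth regenerates the machine server certificate so its SANs cover every name the node is reachable by (the san=[...] list logged above). A rough openssl equivalent of that step, assuming ca.pem/ca-key.pem are the minikube CA files named in the log (a sketch, not minikube's actual code path):

    # Generate a fresh server key and CSR for the machine.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.pause-470021"
    # Sign it with the cluster CA, embedding the same SANs the log lists.
    openssl x509 -req -in server.csr -days 365 \
      -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:pause-470021') \
      -out server.pem
    # Inspect the embedded SANs.
    openssl x509 -noout -text -in server.pem | grep -A1 'Subject Alternative Name'
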
	I1027 19:52:54.258108  428024 ubuntu.go:206] setting minikube options for container-runtime
	I1027 19:52:54.258316  428024 config.go:182] Loaded profile config "pause-470021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:52:54.258428  428024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-470021
	I1027 19:52:54.275632  428024 main.go:141] libmachine: Using SSH client type: native
	I1027 19:52:54.275947  428024 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1027 19:52:54.275974  428024 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 19:52:53.181982  412559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:52:53.194659  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:52:53.194729  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:52:53.222447  412559 cri.go:89] found id: ""
	I1027 19:52:53.222472  412559 logs.go:282] 0 containers: []
	W1027 19:52:53.222480  412559 logs.go:284] No container was found matching "kube-apiserver"
	I1027 19:52:53.222486  412559 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:52:53.222544  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:52:53.251578  412559 cri.go:89] found id: ""
	I1027 19:52:53.251599  412559 logs.go:282] 0 containers: []
	W1027 19:52:53.251607  412559 logs.go:284] No container was found matching "etcd"
	I1027 19:52:53.251613  412559 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:52:53.251670  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:52:53.283946  412559 cri.go:89] found id: ""
	I1027 19:52:53.284028  412559 logs.go:282] 0 containers: []
	W1027 19:52:53.284040  412559 logs.go:284] No container was found matching "coredns"
	I1027 19:52:53.284048  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:52:53.284117  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:52:53.326246  412559 cri.go:89] found id: ""
	I1027 19:52:53.326267  412559 logs.go:282] 0 containers: []
	W1027 19:52:53.326279  412559 logs.go:284] No container was found matching "kube-scheduler"
	I1027 19:52:53.326286  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:52:53.326342  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:52:53.365606  412559 cri.go:89] found id: ""
	I1027 19:52:53.365627  412559 logs.go:282] 0 containers: []
	W1027 19:52:53.365649  412559 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:52:53.365656  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:52:53.365735  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:52:53.403294  412559 cri.go:89] found id: ""
	I1027 19:52:53.403317  412559 logs.go:282] 0 containers: []
	W1027 19:52:53.403325  412559 logs.go:284] No container was found matching "kube-controller-manager"
	I1027 19:52:53.403332  412559 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:52:53.403393  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:52:53.439907  412559 cri.go:89] found id: ""
	I1027 19:52:53.439930  412559 logs.go:282] 0 containers: []
	W1027 19:52:53.439938  412559 logs.go:284] No container was found matching "kindnet"
	I1027 19:52:53.439945  412559 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:52:53.440027  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:52:53.492378  412559 cri.go:89] found id: ""
	I1027 19:52:53.492404  412559 logs.go:282] 0 containers: []
	W1027 19:52:53.492412  412559 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:52:53.492422  412559 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:52:53.492433  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:52:53.594178  412559 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:52:53.594204  412559 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:52:53.594216  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:52:53.635870  412559 logs.go:123] Gathering logs for container status ...
	I1027 19:52:53.635908  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:52:53.671773  412559 logs.go:123] Gathering logs for kubelet ...
	I1027 19:52:53.671800  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:52:53.809479  412559 logs.go:123] Gathering logs for dmesg ...
	I1027 19:52:53.809517  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
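
Between retries, process 412559 is simply polling the runtime for a kube-apiserver container and falling back to log gathering while none exists. A bash sketch of the same wait loop (the timeout budget is illustrative, not taken from the log):

    # Poll CRI-O until a kube-apiserver container shows up, or give up.
    deadline=$((SECONDS + 120))   # illustrative 2-minute budget
    while [ "$SECONDS" -lt "$deadline" ]; do
      id=$(sudo crictl ps -a --quiet --name=kube-apiserver)
      if [ -n "$id" ]; then
        echo "kube-apiserver container: $id"
        break
      fi
      sleep 3                     # same order of magnitude as the ~3s cadence above
    done
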
	I1027 19:52:56.334666  412559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:52:56.344499  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:52:56.344566  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:52:56.369209  412559 cri.go:89] found id: ""
	I1027 19:52:56.369234  412559 logs.go:282] 0 containers: []
	W1027 19:52:56.369242  412559 logs.go:284] No container was found matching "kube-apiserver"
	I1027 19:52:56.369248  412559 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:52:56.369306  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:52:56.398131  412559 cri.go:89] found id: ""
	I1027 19:52:56.398152  412559 logs.go:282] 0 containers: []
	W1027 19:52:56.398160  412559 logs.go:284] No container was found matching "etcd"
	I1027 19:52:56.398166  412559 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:52:56.398223  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:52:56.423196  412559 cri.go:89] found id: ""
	I1027 19:52:56.423221  412559 logs.go:282] 0 containers: []
	W1027 19:52:56.423231  412559 logs.go:284] No container was found matching "coredns"
	I1027 19:52:56.423237  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:52:56.423297  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:52:56.448340  412559 cri.go:89] found id: ""
	I1027 19:52:56.448366  412559 logs.go:282] 0 containers: []
	W1027 19:52:56.448375  412559 logs.go:284] No container was found matching "kube-scheduler"
	I1027 19:52:56.448381  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:52:56.448439  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:52:56.472857  412559 cri.go:89] found id: ""
	I1027 19:52:56.472880  412559 logs.go:282] 0 containers: []
	W1027 19:52:56.472888  412559 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:52:56.472894  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:52:56.472952  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:52:56.498191  412559 cri.go:89] found id: ""
	I1027 19:52:56.498213  412559 logs.go:282] 0 containers: []
	W1027 19:52:56.498221  412559 logs.go:284] No container was found matching "kube-controller-manager"
	I1027 19:52:56.498234  412559 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:52:56.498293  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:52:56.524557  412559 cri.go:89] found id: ""
	I1027 19:52:56.524583  412559 logs.go:282] 0 containers: []
	W1027 19:52:56.524592  412559 logs.go:284] No container was found matching "kindnet"
	I1027 19:52:56.524599  412559 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:52:56.524661  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:52:56.553984  412559 cri.go:89] found id: ""
	I1027 19:52:56.554006  412559 logs.go:282] 0 containers: []
	W1027 19:52:56.554014  412559 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:52:56.554022  412559 logs.go:123] Gathering logs for kubelet ...
	I1027 19:52:56.554033  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:52:56.669122  412559 logs.go:123] Gathering logs for dmesg ...
	I1027 19:52:56.669158  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:52:56.688124  412559 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:52:56.688160  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:52:56.760658  412559 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:52:56.760679  412559 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:52:56.760695  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:52:56.796986  412559 logs.go:123] Gathering logs for container status ...
	I1027 19:52:56.797023  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:52:59.671676  428024 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 19:52:59.671712  428024 machine.go:96] duration metric: took 6.76617363s to provisionDockerMachine
	I1027 19:52:59.671723  428024 start.go:293] postStartSetup for "pause-470021" (driver="docker")
	I1027 19:52:59.671734  428024 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 19:52:59.671795  428024 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 19:52:59.671836  428024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-470021
	I1027 19:52:59.698156  428024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/pause-470021/id_rsa Username:docker}
	I1027 19:52:59.812110  428024 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 19:52:59.816075  428024 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 19:52:59.816101  428024 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 19:52:59.816111  428024 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/addons for local assets ...
	I1027 19:52:59.816185  428024 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/files for local assets ...
	I1027 19:52:59.816258  428024 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem -> 2678802.pem in /etc/ssl/certs
	I1027 19:52:59.816361  428024 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 19:52:59.824721  428024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 19:52:59.857426  428024 start.go:296] duration metric: took 185.68734ms for postStartSetup
	I1027 19:52:59.857584  428024 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:52:59.857684  428024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-470021
	I1027 19:52:59.889201  428024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/pause-470021/id_rsa Username:docker}
	I1027 19:52:59.996724  428024 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 19:53:00.024059  428024 fix.go:56] duration metric: took 7.14016078s for fixHost
	I1027 19:53:00.024086  428024 start.go:83] releasing machines lock for "pause-470021", held for 7.140214858s
	I1027 19:53:00.024180  428024 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-470021
	I1027 19:53:00.107042  428024 ssh_runner.go:195] Run: cat /version.json
	I1027 19:53:00.109603  428024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-470021
	I1027 19:53:00.110426  428024 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 19:53:00.110505  428024 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-470021
	I1027 19:53:00.169659  428024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/pause-470021/id_rsa Username:docker}
	I1027 19:53:00.185168  428024 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/pause-470021/id_rsa Username:docker}
	I1027 19:53:00.348161  428024 ssh_runner.go:195] Run: systemctl --version
	I1027 19:53:00.442236  428024 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 19:53:00.490450  428024 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 19:53:00.496613  428024 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 19:53:00.496698  428024 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 19:53:00.507627  428024 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 19:53:00.507660  428024 start.go:495] detecting cgroup driver to use...
	I1027 19:53:00.507714  428024 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 19:53:00.507786  428024 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 19:53:00.525906  428024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 19:53:00.541747  428024 docker.go:218] disabling cri-docker service (if available) ...
	I1027 19:53:00.541876  428024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 19:53:00.558929  428024 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 19:53:00.573702  428024 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 19:53:00.710600  428024 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 19:53:00.848899  428024 docker.go:234] disabling docker service ...
	I1027 19:53:00.848967  428024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 19:53:00.864233  428024 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 19:53:00.877888  428024 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 19:53:01.010739  428024 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 19:53:01.146388  428024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 19:53:01.162092  428024 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 19:53:01.179818  428024 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 19:53:01.179917  428024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:53:01.189841  428024 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 19:53:01.189938  428024 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:53:01.201524  428024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:53:01.212200  428024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:53:01.222332  428024 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 19:53:01.232024  428024 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:53:01.242614  428024 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:53:01.252265  428024 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:53:01.261986  428024 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 19:53:01.270348  428024 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 19:53:01.278425  428024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:53:01.433614  428024 ssh_runner.go:195] Run: sudo systemctl restart crio
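
The sed edits above all target the same drop-in, so after they run /etc/crio/crio.conf.d/02-crio.conf should converge on roughly the following TOML. This is reconstructed from the commands, not read off the node; writing it to an .example file keeps the sketch side-effect free:

    cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/02-crio.conf.example
    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"
    EOF
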
	I1027 19:53:01.613800  428024 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 19:53:01.613941  428024 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 19:53:01.617820  428024 start.go:563] Will wait 60s for crictl version
	I1027 19:53:01.617927  428024 ssh_runner.go:195] Run: which crictl
	I1027 19:53:01.621582  428024 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 19:53:01.651131  428024 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 19:53:01.651214  428024 ssh_runner.go:195] Run: crio --version
	I1027 19:53:01.680192  428024 ssh_runner.go:195] Run: crio --version
	I1027 19:53:01.718433  428024 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 19:53:01.721436  428024 cli_runner.go:164] Run: docker network inspect pause-470021 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 19:53:01.738558  428024 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1027 19:53:01.742668  428024 kubeadm.go:883] updating cluster {Name:pause-470021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-470021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 19:53:01.742807  428024 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:53:01.742868  428024 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:53:01.777906  428024 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 19:53:01.777934  428024 crio.go:433] Images already preloaded, skipping extraction
	I1027 19:53:01.777990  428024 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:53:01.804042  428024 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 19:53:01.804067  428024 cache_images.go:85] Images are preloaded, skipping loading
	I1027 19:53:01.804075  428024 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1027 19:53:01.804175  428024 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-470021 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-470021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
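
Note the empty ExecStart= line in the unit fragment above: in a systemd drop-in it clears the ExecStart inherited from the base kubelet.service before the minikube-specific command line is installed; without the reset, systemd would reject a second ExecStart for a non-oneshot service. To inspect the merged result on the node (a sketch, not part of the recorded run):

    # Show the base unit plus every drop-in, in the order systemd merges them.
    systemctl cat kubelet
    # Confirm which ExecStart survived the reset.
    systemctl show kubelet -p ExecStart
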
	I1027 19:53:01.804258  428024 ssh_runner.go:195] Run: crio config
	I1027 19:53:01.874113  428024 cni.go:84] Creating CNI manager for ""
	I1027 19:53:01.874137  428024 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:53:01.874162  428024 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 19:53:01.874194  428024 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-470021 NodeName:pause-470021 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 19:53:01.874367  428024 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-470021"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 19:53:01.874452  428024 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 19:53:01.883568  428024 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 19:53:01.883663  428024 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 19:53:01.891702  428024 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1027 19:53:01.910259  428024 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 19:53:01.925951  428024 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
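
Once the generated manifest lands at /var/tmp/minikube/kubeadm.yaml.new, it can be checked against the kubeadm API types before any restart is attempted. A hedged sketch, assuming the bundled kubeadm binary supports the `config validate` subcommand (present in recent releases; not something this run actually executed):

    # Validate the staged kubeadm config against its declared API versions.
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
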
	I1027 19:53:01.940478  428024 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1027 19:53:01.944549  428024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:53:02.107884  428024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:53:02.125608  428024 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/pause-470021 for IP: 192.168.85.2
	I1027 19:53:02.125679  428024 certs.go:195] generating shared ca certs ...
	I1027 19:53:02.125711  428024 certs.go:227] acquiring lock for ca certs: {Name:mk172548fc54811b51040cc0201d01eb4b3dd19b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:53:02.125889  428024 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key
	I1027 19:53:02.125977  428024 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key
	I1027 19:53:02.126026  428024 certs.go:257] generating profile certs ...
	I1027 19:53:02.126167  428024 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/pause-470021/client.key
	I1027 19:53:02.126359  428024 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/pause-470021/apiserver.key.209bd21f
	I1027 19:53:02.126471  428024 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/pause-470021/proxy-client.key
	I1027 19:53:02.126632  428024 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880.pem (1338 bytes)
	W1027 19:53:02.126706  428024 certs.go:480] ignoring /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880_empty.pem, impossibly tiny 0 bytes
	I1027 19:53:02.126787  428024 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 19:53:02.126843  428024 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem (1078 bytes)
	I1027 19:53:02.126915  428024 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem (1123 bytes)
	I1027 19:53:02.126963  428024 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem (1675 bytes)
	I1027 19:53:02.127075  428024 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 19:53:02.127722  428024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 19:53:02.149244  428024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 19:53:02.170271  428024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 19:53:02.193420  428024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 19:53:02.215160  428024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/pause-470021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 19:53:02.238431  428024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/pause-470021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 19:53:02.259505  428024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/pause-470021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 19:53:02.282921  428024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/pause-470021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 19:53:02.303041  428024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /usr/share/ca-certificates/2678802.pem (1708 bytes)
	I1027 19:53:02.325762  428024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 19:53:02.346255  428024 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880.pem --> /usr/share/ca-certificates/267880.pem (1338 bytes)
	I1027 19:53:02.367999  428024 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 19:53:02.381777  428024 ssh_runner.go:195] Run: openssl version
	I1027 19:53:02.389001  428024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/267880.pem && ln -fs /usr/share/ca-certificates/267880.pem /etc/ssl/certs/267880.pem"
	I1027 19:53:02.397908  428024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/267880.pem
	I1027 19:53:02.402026  428024 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:04 /usr/share/ca-certificates/267880.pem
	I1027 19:53:02.402151  428024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/267880.pem
	I1027 19:53:02.447748  428024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/267880.pem /etc/ssl/certs/51391683.0"
	I1027 19:53:02.456228  428024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2678802.pem && ln -fs /usr/share/ca-certificates/2678802.pem /etc/ssl/certs/2678802.pem"
	I1027 19:53:02.465590  428024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2678802.pem
	I1027 19:53:02.471084  428024 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:04 /usr/share/ca-certificates/2678802.pem
	I1027 19:53:02.471202  428024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2678802.pem
	I1027 19:53:02.519062  428024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2678802.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 19:53:02.528161  428024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 19:53:02.537626  428024 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:53:02.542127  428024 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:53:02.542247  428024 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:53:02.587285  428024 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
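
The hash-named symlinks above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's c_rehash convention: the link name is the certificate's subject hash plus a collision counter, which is how hashed-directory lookups in /etc/ssl/certs locate a CA. Reproducing one link by hand (a sketch using the CA from this run):

    # Compute the subject hash OpenSSL uses for directory lookups.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$h"   # prints b5213941 for the CA in this run
    # The trust-store entry is "<hash>.<n>"; .0 when there is no collision.
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
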
	I1027 19:53:02.602167  428024 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 19:53:02.608362  428024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 19:52:59.325972  412559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:52:59.336207  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:52:59.336299  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:52:59.364925  412559 cri.go:89] found id: ""
	I1027 19:52:59.364950  412559 logs.go:282] 0 containers: []
	W1027 19:52:59.364958  412559 logs.go:284] No container was found matching "kube-apiserver"
	I1027 19:52:59.364993  412559 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:52:59.365056  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:52:59.391243  412559 cri.go:89] found id: ""
	I1027 19:52:59.391272  412559 logs.go:282] 0 containers: []
	W1027 19:52:59.391281  412559 logs.go:284] No container was found matching "etcd"
	I1027 19:52:59.391287  412559 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:52:59.391346  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:52:59.420038  412559 cri.go:89] found id: ""
	I1027 19:52:59.420061  412559 logs.go:282] 0 containers: []
	W1027 19:52:59.420070  412559 logs.go:284] No container was found matching "coredns"
	I1027 19:52:59.420076  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:52:59.420131  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:52:59.444929  412559 cri.go:89] found id: ""
	I1027 19:52:59.444952  412559 logs.go:282] 0 containers: []
	W1027 19:52:59.444961  412559 logs.go:284] No container was found matching "kube-scheduler"
	I1027 19:52:59.444967  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:52:59.445027  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:52:59.488742  412559 cri.go:89] found id: ""
	I1027 19:52:59.488768  412559 logs.go:282] 0 containers: []
	W1027 19:52:59.488777  412559 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:52:59.488784  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:52:59.488841  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:52:59.517263  412559 cri.go:89] found id: ""
	I1027 19:52:59.517288  412559 logs.go:282] 0 containers: []
	W1027 19:52:59.517296  412559 logs.go:284] No container was found matching "kube-controller-manager"
	I1027 19:52:59.517303  412559 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:52:59.517361  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:52:59.552224  412559 cri.go:89] found id: ""
	I1027 19:52:59.552250  412559 logs.go:282] 0 containers: []
	W1027 19:52:59.552258  412559 logs.go:284] No container was found matching "kindnet"
	I1027 19:52:59.552265  412559 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:52:59.552321  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:52:59.583504  412559 cri.go:89] found id: ""
	I1027 19:52:59.583530  412559 logs.go:282] 0 containers: []
	W1027 19:52:59.583539  412559 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:52:59.583548  412559 logs.go:123] Gathering logs for kubelet ...
	I1027 19:52:59.583560  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:52:59.731129  412559 logs.go:123] Gathering logs for dmesg ...
	I1027 19:52:59.731207  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:52:59.749151  412559 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:52:59.749181  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:52:59.831698  412559 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:52:59.831722  412559 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:52:59.831741  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:52:59.877271  412559 logs.go:123] Gathering logs for container status ...
	I1027 19:52:59.877568  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:53:02.424170  412559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:53:02.435696  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:53:02.435775  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:53:02.482557  412559 cri.go:89] found id: ""
	I1027 19:53:02.482593  412559 logs.go:282] 0 containers: []
	W1027 19:53:02.482601  412559 logs.go:284] No container was found matching "kube-apiserver"
	I1027 19:53:02.482608  412559 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:53:02.482680  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:53:02.533493  412559 cri.go:89] found id: ""
	I1027 19:53:02.533517  412559 logs.go:282] 0 containers: []
	W1027 19:53:02.533526  412559 logs.go:284] No container was found matching "etcd"
	I1027 19:53:02.533539  412559 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:53:02.533606  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:53:02.581773  412559 cri.go:89] found id: ""
	I1027 19:53:02.581796  412559 logs.go:282] 0 containers: []
	W1027 19:53:02.581804  412559 logs.go:284] No container was found matching "coredns"
	I1027 19:53:02.581819  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:53:02.581899  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:53:02.633484  412559 cri.go:89] found id: ""
	I1027 19:53:02.633518  412559 logs.go:282] 0 containers: []
	W1027 19:53:02.633526  412559 logs.go:284] No container was found matching "kube-scheduler"
	I1027 19:53:02.633533  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:53:02.633593  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:53:02.685560  412559 cri.go:89] found id: ""
	I1027 19:53:02.685585  412559 logs.go:282] 0 containers: []
	W1027 19:53:02.685593  412559 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:53:02.685600  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:53:02.685669  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:53:02.655560  428024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 19:53:02.699447  428024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 19:53:02.742646  428024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 19:53:02.844330  428024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 19:53:03.029854  428024 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
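
Each -checkend 86400 call above exits non-zero if the certificate expires within the next 24 hours, which is how the restart path decides whether control-plane certs need regeneration. The same sweep as a loop (paths taken from the log; the warning text is illustrative):

    # Flag any control-plane cert that expires within 24h (86400s).
    for c in apiserver-etcd-client.crt apiserver-kubelet-client.crt \
             etcd/server.crt etcd/healthcheck-client.crt etcd/peer.crt \
             front-proxy-client.crt; do
      if ! sudo openssl x509 -noout -checkend 86400 \
            -in "/var/lib/minikube/certs/$c"; then
        echo "WARN: $c expires within 24h"
      fi
    done
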
	I1027 19:53:03.169930  428024 kubeadm.go:400] StartCluster: {Name:pause-470021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-470021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:53:03.170043  428024 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:53:03.170115  428024 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:53:03.300489  428024 cri.go:89] found id: "bb5755505420e2c40bd2f92567f06c5b2bff9cc40d5286bd60d82a6f776ca771"
	I1027 19:53:03.300513  428024 cri.go:89] found id: "c384025bdb6e4e5fd0f211129c90fa79ae691468550d2be99ba79f5cff89e73e"
	I1027 19:53:03.300518  428024 cri.go:89] found id: "d58e48bacf1495d3c508b1e7222843a56c1e79dfedb59aeb62a1c6aad514d698"
	I1027 19:53:03.300521  428024 cri.go:89] found id: "78decaae3b0e60034f2398cb0b994dce873210be4f725d847ff7529ca517374f"
	I1027 19:53:03.300525  428024 cri.go:89] found id: "669fb029456a2c87088307a9f1f18b0a214174bd7c9bb178f24f3ca5a7fc6e1b"
	I1027 19:53:03.300528  428024 cri.go:89] found id: "336f9e0fad2e20cdf64c964f42b31d59f4633709021335d2e471a64c05518236"
	I1027 19:53:03.300532  428024 cri.go:89] found id: "7eb02b6b3e6acbfaa83db3a48ac65c331d99ca20916430c27ada8e7bfb90bc22"
	I1027 19:53:03.300535  428024 cri.go:89] found id: "69a0468ab45ef90113638021bffb192716f99ab5157428ccbd881c54365dd32d"
	I1027 19:53:03.300538  428024 cri.go:89] found id: "87b11ab122b25bab328b375d9d04001497127ff2585367e02e286374156d6569"
	I1027 19:53:03.300548  428024 cri.go:89] found id: "bdd6aaebbf9b1f7984851798a00219d0ab9df585fc4aa577a1bf0220aa1fd7fc"
	I1027 19:53:03.300552  428024 cri.go:89] found id: "5bab714cb0c4efef9b8229cf91fc7171e0d11a7652571bf103319a343da548c2"
	I1027 19:53:03.300557  428024 cri.go:89] found id: "eb0212fd806c68f3148012bf7e975926ecbbcc8417725249917368a738dcb11d"
	I1027 19:53:03.300563  428024 cri.go:89] found id: "9c65fbe4eff1527a24e959481045341035c8c2ea0c34900ec330446573f13baf"
	I1027 19:53:03.300567  428024 cri.go:89] found id: ""
	I1027 19:53:03.300615  428024 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 19:53:03.328231  428024 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:53:03Z" level=error msg="open /run/runc: no such file or directory"
	I1027 19:53:03.328312  428024 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 19:53:03.342843  428024 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1027 19:53:03.342866  428024 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1027 19:53:03.342921  428024 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 19:53:03.357859  428024 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 19:53:03.358484  428024 kubeconfig.go:125] found "pause-470021" server: "https://192.168.85.2:8443"
	I1027 19:53:03.359316  428024 kapi.go:59] client config for pause-470021: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21801-266035/.minikube/profiles/pause-470021/client.crt", KeyFile:"/home/jenkins/minikube-integration/21801-266035/.minikube/profiles/pause-470021/client.key", CAFile:"/home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1027 19:53:03.359814  428024 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1027 19:53:03.359835  428024 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1027 19:53:03.359840  428024 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1027 19:53:03.359845  428024 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1027 19:53:03.359850  428024 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1027 19:53:03.360152  428024 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 19:53:03.371308  428024 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1027 19:53:03.371342  428024 kubeadm.go:601] duration metric: took 28.470602ms to restartPrimaryControlPlane
	I1027 19:53:03.371352  428024 kubeadm.go:402] duration metric: took 201.43288ms to StartCluster
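
	The "does not require reconfiguration" decision reduces to an exit code: minikube renders the desired config as /var/tmp/minikube/kubeadm.yaml.new beside the deployed kubeadm.yaml and runs "diff -u" on the pair, where status 0 means identical and status 1 means drift. A sketch of that test (paths copied from the log; needsReconfig is an illustrative helper, not minikube's):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // needsReconfig interprets diff's exit status: 0 means the deployed and
    // rendered configs match, 1 means they differ, anything else means diff
    // itself failed (for example, a missing file).
    func needsReconfig(oldPath, newPath string) (bool, error) {
        err := exec.Command("sudo", "diff", "-u", oldPath, newPath).Run()
        if err == nil {
            return false, nil
        }
        if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
            return true, nil
        }
        return false, err
    }

    func main() {
        changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            fmt.Println("diff failed:", err)
            return
        }
        fmt.Println("requires reconfiguration:", changed)
    }
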
	I1027 19:53:03.371366  428024 settings.go:142] acquiring lock: {Name:mk49b9e2e9b7901c6b51e89b8b1e8ee7f9e88107 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:53:03.371427  428024 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 19:53:03.372275  428024 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/kubeconfig: {Name:mk0a03a594f8d9f9199b1c7cff7c550cec8414c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:53:03.372500  428024 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:53:03.372853  428024 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 19:53:03.373113  428024 config.go:182] Loaded profile config "pause-470021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:53:03.376472  428024 out.go:179] * Enabled addons: 
	I1027 19:53:03.376536  428024 out.go:179] * Verifying Kubernetes components...
	I1027 19:53:03.379423  428024 addons.go:514] duration metric: took 6.55766ms for enable addons: enabled=[]
	I1027 19:53:03.379521  428024 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:53:03.625783  428024 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:53:03.643053  428024 node_ready.go:35] waiting up to 6m0s for node "pause-470021" to be "Ready" ...
	I1027 19:53:02.749930  412559 cri.go:89] found id: ""
	I1027 19:53:02.749954  412559 logs.go:282] 0 containers: []
	W1027 19:53:02.749962  412559 logs.go:284] No container was found matching "kube-controller-manager"
	I1027 19:53:02.749969  412559 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:53:02.750029  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:53:02.797979  412559 cri.go:89] found id: ""
	I1027 19:53:02.798003  412559 logs.go:282] 0 containers: []
	W1027 19:53:02.798010  412559 logs.go:284] No container was found matching "kindnet"
	I1027 19:53:02.798016  412559 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:53:02.798091  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:53:02.850121  412559 cri.go:89] found id: ""
	I1027 19:53:02.850148  412559 logs.go:282] 0 containers: []
	W1027 19:53:02.850158  412559 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:53:02.850168  412559 logs.go:123] Gathering logs for kubelet ...
	I1027 19:53:02.850179  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:53:03.016596  412559 logs.go:123] Gathering logs for dmesg ...
	I1027 19:53:03.016633  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:53:03.040763  412559 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:53:03.040803  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:53:03.173573  412559 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:53:03.173597  412559 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:53:03.173609  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:53:03.228677  412559 logs.go:123] Gathering logs for container status ...
	I1027 19:53:03.228756  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
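
	The container-status command above is deliberately defensive: "which crictl || echo crictl" resolves the binary when installed and falls back to the bare name otherwise, and the trailing "|| sudo docker ps -a" covers hosts where crictl is not usable at all. A small Go sketch that invokes the same fallback chain (the command string is taken verbatim from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Run the same bash fallback chain minikube uses to gather
        // container status, whichever runtime CLI happens to be present.
        cmd := exec.Command("/bin/bash", "-c",
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("container status unavailable:", err)
        }
    }
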
	I1027 19:53:05.783107  412559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:53:05.796211  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:53:05.796291  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:53:05.845660  412559 cri.go:89] found id: ""
	I1027 19:53:05.845686  412559 logs.go:282] 0 containers: []
	W1027 19:53:05.845694  412559 logs.go:284] No container was found matching "kube-apiserver"
	I1027 19:53:05.845706  412559 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:53:05.845781  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:53:05.883372  412559 cri.go:89] found id: ""
	I1027 19:53:05.883398  412559 logs.go:282] 0 containers: []
	W1027 19:53:05.883407  412559 logs.go:284] No container was found matching "etcd"
	I1027 19:53:05.883413  412559 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:53:05.883474  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:53:05.933220  412559 cri.go:89] found id: ""
	I1027 19:53:05.933257  412559 logs.go:282] 0 containers: []
	W1027 19:53:05.933266  412559 logs.go:284] No container was found matching "coredns"
	I1027 19:53:05.933272  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:53:05.933333  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:53:05.982268  412559 cri.go:89] found id: ""
	I1027 19:53:05.982309  412559 logs.go:282] 0 containers: []
	W1027 19:53:05.982318  412559 logs.go:284] No container was found matching "kube-scheduler"
	I1027 19:53:05.982324  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:53:05.982385  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:53:06.030911  412559 cri.go:89] found id: ""
	I1027 19:53:06.030946  412559 logs.go:282] 0 containers: []
	W1027 19:53:06.030960  412559 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:53:06.030967  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:53:06.031106  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:53:06.084616  412559 cri.go:89] found id: ""
	I1027 19:53:06.084643  412559 logs.go:282] 0 containers: []
	W1027 19:53:06.084652  412559 logs.go:284] No container was found matching "kube-controller-manager"
	I1027 19:53:06.084659  412559 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:53:06.084717  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:53:06.128563  412559 cri.go:89] found id: ""
	I1027 19:53:06.128589  412559 logs.go:282] 0 containers: []
	W1027 19:53:06.128598  412559 logs.go:284] No container was found matching "kindnet"
	I1027 19:53:06.128604  412559 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:53:06.128661  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:53:06.167728  412559 cri.go:89] found id: ""
	I1027 19:53:06.167755  412559 logs.go:282] 0 containers: []
	W1027 19:53:06.167764  412559 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:53:06.167772  412559 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:53:06.167784  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:53:06.283538  412559 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:53:06.283565  412559 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:53:06.283579  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:53:06.336526  412559 logs.go:123] Gathering logs for container status ...
	I1027 19:53:06.336565  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:53:06.395115  412559 logs.go:123] Gathering logs for kubelet ...
	I1027 19:53:06.395145  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:53:06.532717  412559 logs.go:123] Gathering logs for dmesg ...
	I1027 19:53:06.532793  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
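
	Each cycle in the 412559 run is the same diagnostic sweep: for every expected component it lists containers in any state with "crictl ps -a --quiet --name=<component>", and eight empty results mean the runtime holds no control-plane containers at all, so the run falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output before retrying. A compact sketch of that sweep (the loop shape is inferred from the log, not lifted from minikube's source):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
        }
        for _, name := range components {
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            ids := strings.Fields(string(out))
            if err != nil || len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %d container(s)\n", name, len(ids))
        }
    }
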
	I1027 19:53:08.017130  428024 node_ready.go:49] node "pause-470021" is "Ready"
	I1027 19:53:08.017165  428024 node_ready.go:38] duration metric: took 4.374072687s for node "pause-470021" to be "Ready" ...
	I1027 19:53:08.017179  428024 api_server.go:52] waiting for apiserver process to appear ...
	I1027 19:53:08.017244  428024 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:53:08.040528  428024 api_server.go:72] duration metric: took 4.667990726s to wait for apiserver process to appear ...
	I1027 19:53:08.040554  428024 api_server.go:88] waiting for apiserver healthz status ...
	I1027 19:53:08.040574  428024 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 19:53:08.082130  428024 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1027 19:53:08.082166  428024 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1027 19:53:08.540684  428024 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 19:53:08.555974  428024 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 19:53:08.556017  428024 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 19:53:09.041617  428024 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 19:53:09.079148  428024 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 19:53:09.079185  428024 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 19:53:09.540698  428024 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 19:53:09.552175  428024 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1027 19:53:09.553516  428024 api_server.go:141] control plane version: v1.34.1
	I1027 19:53:09.553544  428024 api_server.go:131] duration metric: took 1.512982393s to wait for apiserver health ...
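
	The healthz progression above is the normal post-restart sequence: first a 403 because the unauthenticated probe hits RBAC before the bootstrap roles that permit anonymous /healthz access exist, then 500s while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes poststarthooks complete, and finally 200 "ok" about 1.5 s in. A sketch of such a poll loop; TLS verification is skipped here purely for brevity, whereas the real client config above trusts the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthz polls an apiserver /healthz endpoint until it returns
    // 200 or the deadline passes; 403 and 500 responses are retried.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy within %s", url, timeout)
    }

    func main() {
        fmt.Println(waitHealthz("https://192.168.85.2:8443/healthz", 2*time.Minute))
    }
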
	I1027 19:53:09.553556  428024 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 19:53:09.557305  428024 system_pods.go:59] 7 kube-system pods found
	I1027 19:53:09.557341  428024 system_pods.go:61] "coredns-66bc5c9577-nrzpx" [1ee91970-2f04-4fd7-b25b-8939d1ac7bd0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 19:53:09.557350  428024 system_pods.go:61] "etcd-pause-470021" [36794bba-7cf3-4ff8-85c5-4913406b2e6a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 19:53:09.557358  428024 system_pods.go:61] "kindnet-czq4c" [0b877aea-545c-4196-abcc-1c1856b6e3cb] Running
	I1027 19:53:09.557365  428024 system_pods.go:61] "kube-apiserver-pause-470021" [260f4617-c3c8-4e74-ab78-87bf979ca6b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 19:53:09.557376  428024 system_pods.go:61] "kube-controller-manager-pause-470021" [56fc150e-c825-4df6-b176-f7a05a4b2b2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 19:53:09.557391  428024 system_pods.go:61] "kube-proxy-5tqdh" [096a2e44-c862-4412-a1d6-080237dfc726] Running
	I1027 19:53:09.557400  428024 system_pods.go:61] "kube-scheduler-pause-470021" [11c1368a-960e-41f5-94d2-0b087ec02a83] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 19:53:09.557410  428024 system_pods.go:74] duration metric: took 3.844508ms to wait for pod list to return data ...
	I1027 19:53:09.557419  428024 default_sa.go:34] waiting for default service account to be created ...
	I1027 19:53:09.567046  428024 default_sa.go:45] found service account: "default"
	I1027 19:53:09.567072  428024 default_sa.go:55] duration metric: took 9.641256ms for default service account to be created ...
	I1027 19:53:09.567082  428024 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 19:53:09.570325  428024 system_pods.go:86] 7 kube-system pods found
	I1027 19:53:09.570354  428024 system_pods.go:89] "coredns-66bc5c9577-nrzpx" [1ee91970-2f04-4fd7-b25b-8939d1ac7bd0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 19:53:09.570363  428024 system_pods.go:89] "etcd-pause-470021" [36794bba-7cf3-4ff8-85c5-4913406b2e6a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 19:53:09.570378  428024 system_pods.go:89] "kindnet-czq4c" [0b877aea-545c-4196-abcc-1c1856b6e3cb] Running
	I1027 19:53:09.570388  428024 system_pods.go:89] "kube-apiserver-pause-470021" [260f4617-c3c8-4e74-ab78-87bf979ca6b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 19:53:09.570410  428024 system_pods.go:89] "kube-controller-manager-pause-470021" [56fc150e-c825-4df6-b176-f7a05a4b2b2a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 19:53:09.570419  428024 system_pods.go:89] "kube-proxy-5tqdh" [096a2e44-c862-4412-a1d6-080237dfc726] Running
	I1027 19:53:09.570426  428024 system_pods.go:89] "kube-scheduler-pause-470021" [11c1368a-960e-41f5-94d2-0b087ec02a83] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 19:53:09.570432  428024 system_pods.go:126] duration metric: took 3.34474ms to wait for k8s-apps to be running ...
	I1027 19:53:09.570444  428024 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 19:53:09.570508  428024 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:53:09.600954  428024 system_svc.go:56] duration metric: took 30.501788ms WaitForService to wait for kubelet
	I1027 19:53:09.600980  428024 kubeadm.go:586] duration metric: took 6.228448948s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 19:53:09.600998  428024 node_conditions.go:102] verifying NodePressure condition ...
	I1027 19:53:09.608346  428024 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 19:53:09.608379  428024 node_conditions.go:123] node cpu capacity is 2
	I1027 19:53:09.608391  428024 node_conditions.go:105] duration metric: took 7.387634ms to run NodePressure ...
	I1027 19:53:09.608404  428024 start.go:241] waiting for startup goroutines ...
	I1027 19:53:09.608421  428024 start.go:246] waiting for cluster config update ...
	I1027 19:53:09.608432  428024 start.go:255] writing updated cluster config ...
	I1027 19:53:09.608766  428024 ssh_runner.go:195] Run: rm -f paused
	I1027 19:53:09.617384  428024 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 19:53:09.618128  428024 kapi.go:59] client config for pause-470021: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21801-266035/.minikube/profiles/pause-470021/client.crt", KeyFile:"/home/jenkins/minikube-integration/21801-266035/.minikube/profiles/pause-470021/client.key", CAFile:"/home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120240), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1027 19:53:09.621948  428024 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nrzpx" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 19:53:11.626971  428024 pod_ready.go:104] pod "coredns-66bc5c9577-nrzpx" is not "Ready", error: <nil>
	I1027 19:53:09.056990  412559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:53:09.071512  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:53:09.071577  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:53:09.114515  412559 cri.go:89] found id: ""
	I1027 19:53:09.114536  412559 logs.go:282] 0 containers: []
	W1027 19:53:09.114544  412559 logs.go:284] No container was found matching "kube-apiserver"
	I1027 19:53:09.114550  412559 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:53:09.114615  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:53:09.173966  412559 cri.go:89] found id: ""
	I1027 19:53:09.173987  412559 logs.go:282] 0 containers: []
	W1027 19:53:09.173995  412559 logs.go:284] No container was found matching "etcd"
	I1027 19:53:09.174001  412559 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:53:09.174059  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:53:09.218833  412559 cri.go:89] found id: ""
	I1027 19:53:09.218854  412559 logs.go:282] 0 containers: []
	W1027 19:53:09.218862  412559 logs.go:284] No container was found matching "coredns"
	I1027 19:53:09.218868  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:53:09.218926  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:53:09.261080  412559 cri.go:89] found id: ""
	I1027 19:53:09.261152  412559 logs.go:282] 0 containers: []
	W1027 19:53:09.261176  412559 logs.go:284] No container was found matching "kube-scheduler"
	I1027 19:53:09.261198  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:53:09.261308  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:53:09.307949  412559 cri.go:89] found id: ""
	I1027 19:53:09.308023  412559 logs.go:282] 0 containers: []
	W1027 19:53:09.308046  412559 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:53:09.308077  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:53:09.308192  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:53:09.345274  412559 cri.go:89] found id: ""
	I1027 19:53:09.345346  412559 logs.go:282] 0 containers: []
	W1027 19:53:09.345370  412559 logs.go:284] No container was found matching "kube-controller-manager"
	I1027 19:53:09.345393  412559 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:53:09.345499  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:53:09.395232  412559 cri.go:89] found id: ""
	I1027 19:53:09.395304  412559 logs.go:282] 0 containers: []
	W1027 19:53:09.395328  412559 logs.go:284] No container was found matching "kindnet"
	I1027 19:53:09.395350  412559 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:53:09.395462  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:53:09.436724  412559 cri.go:89] found id: ""
	I1027 19:53:09.436795  412559 logs.go:282] 0 containers: []
	W1027 19:53:09.436819  412559 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:53:09.436844  412559 logs.go:123] Gathering logs for kubelet ...
	I1027 19:53:09.436891  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:53:09.597755  412559 logs.go:123] Gathering logs for dmesg ...
	I1027 19:53:09.600334  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:53:09.622513  412559 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:53:09.622582  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:53:09.707543  412559 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:53:09.707564  412559 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:53:09.707579  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:53:09.755694  412559 logs.go:123] Gathering logs for container status ...
	I1027 19:53:09.755767  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:53:12.304460  412559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:53:12.315342  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:53:12.315415  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:53:12.341661  412559 cri.go:89] found id: ""
	I1027 19:53:12.341688  412559 logs.go:282] 0 containers: []
	W1027 19:53:12.341696  412559 logs.go:284] No container was found matching "kube-apiserver"
	I1027 19:53:12.341703  412559 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:53:12.341760  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:53:12.368056  412559 cri.go:89] found id: ""
	I1027 19:53:12.368083  412559 logs.go:282] 0 containers: []
	W1027 19:53:12.368092  412559 logs.go:284] No container was found matching "etcd"
	I1027 19:53:12.368098  412559 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:53:12.368159  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:53:12.392153  412559 cri.go:89] found id: ""
	I1027 19:53:12.392180  412559 logs.go:282] 0 containers: []
	W1027 19:53:12.392190  412559 logs.go:284] No container was found matching "coredns"
	I1027 19:53:12.392197  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:53:12.392253  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:53:12.417131  412559 cri.go:89] found id: ""
	I1027 19:53:12.417156  412559 logs.go:282] 0 containers: []
	W1027 19:53:12.417165  412559 logs.go:284] No container was found matching "kube-scheduler"
	I1027 19:53:12.417172  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:53:12.417227  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:53:12.445499  412559 cri.go:89] found id: ""
	I1027 19:53:12.445524  412559 logs.go:282] 0 containers: []
	W1027 19:53:12.445533  412559 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:53:12.445540  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:53:12.445596  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:53:12.469986  412559 cri.go:89] found id: ""
	I1027 19:53:12.470010  412559 logs.go:282] 0 containers: []
	W1027 19:53:12.470018  412559 logs.go:284] No container was found matching "kube-controller-manager"
	I1027 19:53:12.470024  412559 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:53:12.470081  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:53:12.494372  412559 cri.go:89] found id: ""
	I1027 19:53:12.494396  412559 logs.go:282] 0 containers: []
	W1027 19:53:12.494409  412559 logs.go:284] No container was found matching "kindnet"
	I1027 19:53:12.494415  412559 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:53:12.494471  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:53:12.519982  412559 cri.go:89] found id: ""
	I1027 19:53:12.520006  412559 logs.go:282] 0 containers: []
	W1027 19:53:12.520015  412559 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:53:12.520024  412559 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:53:12.520042  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:53:12.585692  412559 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:53:12.585710  412559 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:53:12.585722  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:53:12.628515  412559 logs.go:123] Gathering logs for container status ...
	I1027 19:53:12.628554  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:53:12.660366  412559 logs.go:123] Gathering logs for kubelet ...
	I1027 19:53:12.660394  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1027 19:53:13.627543  428024 pod_ready.go:104] pod "coredns-66bc5c9577-nrzpx" is not "Ready", error: <nil>
	I1027 19:53:15.127810  428024 pod_ready.go:94] pod "coredns-66bc5c9577-nrzpx" is "Ready"
	I1027 19:53:15.127848  428024 pod_ready.go:86] duration metric: took 5.505872912s for pod "coredns-66bc5c9577-nrzpx" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:53:15.130952  428024 pod_ready.go:83] waiting for pod "etcd-pause-470021" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:53:15.135922  428024 pod_ready.go:94] pod "etcd-pause-470021" is "Ready"
	I1027 19:53:15.135955  428024 pod_ready.go:86] duration metric: took 4.973731ms for pod "etcd-pause-470021" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:53:15.138581  428024 pod_ready.go:83] waiting for pod "kube-apiserver-pause-470021" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 19:53:17.143890  428024 pod_ready.go:104] pod "kube-apiserver-pause-470021" is not "Ready", error: <nil>
	I1027 19:53:12.780155  412559 logs.go:123] Gathering logs for dmesg ...
	I1027 19:53:12.780193  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:53:15.300491  412559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:53:15.310894  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:53:15.311019  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:53:15.336158  412559 cri.go:89] found id: ""
	I1027 19:53:15.336183  412559 logs.go:282] 0 containers: []
	W1027 19:53:15.336192  412559 logs.go:284] No container was found matching "kube-apiserver"
	I1027 19:53:15.336199  412559 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:53:15.336281  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:53:15.361745  412559 cri.go:89] found id: ""
	I1027 19:53:15.361769  412559 logs.go:282] 0 containers: []
	W1027 19:53:15.361777  412559 logs.go:284] No container was found matching "etcd"
	I1027 19:53:15.361783  412559 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:53:15.361841  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:53:15.387823  412559 cri.go:89] found id: ""
	I1027 19:53:15.387847  412559 logs.go:282] 0 containers: []
	W1027 19:53:15.387856  412559 logs.go:284] No container was found matching "coredns"
	I1027 19:53:15.387862  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:53:15.387921  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:53:15.413810  412559 cri.go:89] found id: ""
	I1027 19:53:15.413833  412559 logs.go:282] 0 containers: []
	W1027 19:53:15.413841  412559 logs.go:284] No container was found matching "kube-scheduler"
	I1027 19:53:15.413847  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:53:15.413913  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:53:15.441075  412559 cri.go:89] found id: ""
	I1027 19:53:15.441100  412559 logs.go:282] 0 containers: []
	W1027 19:53:15.441108  412559 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:53:15.441115  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:53:15.441179  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:53:15.466448  412559 cri.go:89] found id: ""
	I1027 19:53:15.466473  412559 logs.go:282] 0 containers: []
	W1027 19:53:15.466481  412559 logs.go:284] No container was found matching "kube-controller-manager"
	I1027 19:53:15.466488  412559 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:53:15.466555  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:53:15.495221  412559 cri.go:89] found id: ""
	I1027 19:53:15.495244  412559 logs.go:282] 0 containers: []
	W1027 19:53:15.495252  412559 logs.go:284] No container was found matching "kindnet"
	I1027 19:53:15.495261  412559 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:53:15.495321  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:53:15.527994  412559 cri.go:89] found id: ""
	I1027 19:53:15.528071  412559 logs.go:282] 0 containers: []
	W1027 19:53:15.528094  412559 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:53:15.528123  412559 logs.go:123] Gathering logs for kubelet ...
	I1027 19:53:15.528167  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:53:15.651122  412559 logs.go:123] Gathering logs for dmesg ...
	I1027 19:53:15.651156  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:53:15.671520  412559 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:53:15.671621  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:53:15.748692  412559 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:53:15.748712  412559 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:53:15.748726  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:53:15.787674  412559 logs.go:123] Gathering logs for container status ...
	I1027 19:53:15.787710  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1027 19:53:19.144700  428024 pod_ready.go:104] pod "kube-apiserver-pause-470021" is not "Ready", error: <nil>
	I1027 19:53:20.144218  428024 pod_ready.go:94] pod "kube-apiserver-pause-470021" is "Ready"
	I1027 19:53:20.144248  428024 pod_ready.go:86] duration metric: took 5.005642408s for pod "kube-apiserver-pause-470021" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:53:20.146480  428024 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-470021" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:53:20.151174  428024 pod_ready.go:94] pod "kube-controller-manager-pause-470021" is "Ready"
	I1027 19:53:20.151204  428024 pod_ready.go:86] duration metric: took 4.695749ms for pod "kube-controller-manager-pause-470021" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:53:20.153618  428024 pod_ready.go:83] waiting for pod "kube-proxy-5tqdh" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:53:20.158523  428024 pod_ready.go:94] pod "kube-proxy-5tqdh" is "Ready"
	I1027 19:53:20.158555  428024 pod_ready.go:86] duration metric: took 4.913687ms for pod "kube-proxy-5tqdh" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:53:20.161023  428024 pod_ready.go:83] waiting for pod "kube-scheduler-pause-470021" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:53:21.325038  428024 pod_ready.go:94] pod "kube-scheduler-pause-470021" is "Ready"
	I1027 19:53:21.325063  428024 pod_ready.go:86] duration metric: took 1.164012141s for pod "kube-scheduler-pause-470021" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:53:21.325075  428024 pod_ready.go:40] duration metric: took 11.707658495s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 19:53:21.400536  428024 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 19:53:21.403512  428024 out.go:179] * Done! kubectl is now configured to use "pause-470021" cluster and "default" namespace by default
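
	The pod_ready phase that closes the 428024 run polls each labelled kube-system pod until its Ready condition reports True or the pod is gone. A minimal client-go sketch of that readiness test, assuming a standard kubeconfig-based clientset (podReady and the kube-dns selector are illustrative):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
            metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("pod %q ready: %v\n", p.Name, podReady(&p))
        }
    }
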
	I1027 19:53:18.319783  412559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:53:18.329980  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:53:18.330048  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:53:18.359414  412559 cri.go:89] found id: ""
	I1027 19:53:18.359437  412559 logs.go:282] 0 containers: []
	W1027 19:53:18.359445  412559 logs.go:284] No container was found matching "kube-apiserver"
	I1027 19:53:18.359451  412559 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:53:18.359508  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:53:18.386400  412559 cri.go:89] found id: ""
	I1027 19:53:18.386425  412559 logs.go:282] 0 containers: []
	W1027 19:53:18.386439  412559 logs.go:284] No container was found matching "etcd"
	I1027 19:53:18.386446  412559 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:53:18.386507  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:53:18.416367  412559 cri.go:89] found id: ""
	I1027 19:53:18.416391  412559 logs.go:282] 0 containers: []
	W1027 19:53:18.416399  412559 logs.go:284] No container was found matching "coredns"
	I1027 19:53:18.416405  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:53:18.416464  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:53:18.441956  412559 cri.go:89] found id: ""
	I1027 19:53:18.441983  412559 logs.go:282] 0 containers: []
	W1027 19:53:18.441991  412559 logs.go:284] No container was found matching "kube-scheduler"
	I1027 19:53:18.441998  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:53:18.442060  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:53:18.469294  412559 cri.go:89] found id: ""
	I1027 19:53:18.469319  412559 logs.go:282] 0 containers: []
	W1027 19:53:18.469328  412559 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:53:18.469334  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:53:18.469395  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:53:18.495088  412559 cri.go:89] found id: ""
	I1027 19:53:18.495121  412559 logs.go:282] 0 containers: []
	W1027 19:53:18.495135  412559 logs.go:284] No container was found matching "kube-controller-manager"
	I1027 19:53:18.495144  412559 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:53:18.495208  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:53:18.522050  412559 cri.go:89] found id: ""
	I1027 19:53:18.522117  412559 logs.go:282] 0 containers: []
	W1027 19:53:18.522139  412559 logs.go:284] No container was found matching "kindnet"
	I1027 19:53:18.522163  412559 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:53:18.522256  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:53:18.547795  412559 cri.go:89] found id: ""
	I1027 19:53:18.547874  412559 logs.go:282] 0 containers: []
	W1027 19:53:18.547896  412559 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:53:18.547915  412559 logs.go:123] Gathering logs for kubelet ...
	I1027 19:53:18.547956  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:53:18.675628  412559 logs.go:123] Gathering logs for dmesg ...
	I1027 19:53:18.675664  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:53:18.693513  412559 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:53:18.693541  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:53:18.768051  412559 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:53:18.768072  412559 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:53:18.768083  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:53:18.804192  412559 logs.go:123] Gathering logs for container status ...
	I1027 19:53:18.804222  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:53:21.334620  412559 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:53:21.346881  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:53:21.346949  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:53:21.382217  412559 cri.go:89] found id: ""
	I1027 19:53:21.382240  412559 logs.go:282] 0 containers: []
	W1027 19:53:21.382248  412559 logs.go:284] No container was found matching "kube-apiserver"
	I1027 19:53:21.382254  412559 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:53:21.382315  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:53:21.437854  412559 cri.go:89] found id: ""
	I1027 19:53:21.437877  412559 logs.go:282] 0 containers: []
	W1027 19:53:21.437895  412559 logs.go:284] No container was found matching "etcd"
	I1027 19:53:21.437902  412559 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:53:21.437974  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:53:21.469478  412559 cri.go:89] found id: ""
	I1027 19:53:21.469499  412559 logs.go:282] 0 containers: []
	W1027 19:53:21.469507  412559 logs.go:284] No container was found matching "coredns"
	I1027 19:53:21.469512  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:53:21.469571  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:53:21.504924  412559 cri.go:89] found id: ""
	I1027 19:53:21.504953  412559 logs.go:282] 0 containers: []
	W1027 19:53:21.504961  412559 logs.go:284] No container was found matching "kube-scheduler"
	I1027 19:53:21.504968  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:53:21.505027  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:53:21.536348  412559 cri.go:89] found id: ""
	I1027 19:53:21.536374  412559 logs.go:282] 0 containers: []
	W1027 19:53:21.536383  412559 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:53:21.536389  412559 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:53:21.536452  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:53:21.575518  412559 cri.go:89] found id: ""
	I1027 19:53:21.575541  412559 logs.go:282] 0 containers: []
	W1027 19:53:21.575549  412559 logs.go:284] No container was found matching "kube-controller-manager"
	I1027 19:53:21.575556  412559 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:53:21.575616  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:53:21.606628  412559 cri.go:89] found id: ""
	I1027 19:53:21.606650  412559 logs.go:282] 0 containers: []
	W1027 19:53:21.606657  412559 logs.go:284] No container was found matching "kindnet"
	I1027 19:53:21.606672  412559 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:53:21.606737  412559 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:53:21.640818  412559 cri.go:89] found id: ""
	I1027 19:53:21.640841  412559 logs.go:282] 0 containers: []
	W1027 19:53:21.640849  412559 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:53:21.640857  412559 logs.go:123] Gathering logs for kubelet ...
	I1027 19:53:21.640869  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:53:21.791954  412559 logs.go:123] Gathering logs for dmesg ...
	I1027 19:53:21.792034  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:53:21.811805  412559 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:53:21.811837  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:53:21.927994  412559 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:53:21.928024  412559 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:53:21.928036  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:53:21.967840  412559 logs.go:123] Gathering logs for container status ...
	I1027 19:53:21.967927  412559 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
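The two cycles above are minikube's log-collection pass: it walks the expected control-plane container names with crictl, finds nothing matching, and falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A minimal sketch of the same probe run by hand inside the node (the crictl invocation is taken verbatim from the ssh_runner lines above):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      # mirrors cri.go: list containers in any state whose name matches
      if [ -z "$(sudo crictl ps -a --quiet --name="$name")" ]; then
        echo "No container was found matching \"$name\""
      fi
    done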
	
	
	==> CRI-O <==
	Oct 27 19:53:03 pause-470021 crio[2071]: time="2025-10-27T19:53:03.244788743Z" level=info msg="Created container bb5755505420e2c40bd2f92567f06c5b2bff9cc40d5286bd60d82a6f776ca771: kube-system/etcd-pause-470021/etcd" id=dc4fde66-2a28-4e58-897b-1eb113d1f5eb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:53:03 pause-470021 crio[2071]: time="2025-10-27T19:53:03.249271928Z" level=info msg="Starting container: bb5755505420e2c40bd2f92567f06c5b2bff9cc40d5286bd60d82a6f776ca771" id=da7477d8-7781-4a43-af2e-5b33df231c81 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:53:03 pause-470021 crio[2071]: time="2025-10-27T19:53:03.249550451Z" level=info msg="Created container d58e48bacf1495d3c508b1e7222843a56c1e79dfedb59aeb62a1c6aad514d698: kube-system/kube-apiserver-pause-470021/kube-apiserver" id=64a90fc8-06b4-4848-80cd-0958f1d51440 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:53:03 pause-470021 crio[2071]: time="2025-10-27T19:53:03.25048009Z" level=info msg="Starting container: d58e48bacf1495d3c508b1e7222843a56c1e79dfedb59aeb62a1c6aad514d698" id=51e6cf69-7bdf-41d9-bd5f-25ddbebcb552 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:53:03 pause-470021 crio[2071]: time="2025-10-27T19:53:03.263342741Z" level=info msg="Started container" PID=2348 containerID=c384025bdb6e4e5fd0f211129c90fa79ae691468550d2be99ba79f5cff89e73e description=kube-system/kube-scheduler-pause-470021/kube-scheduler id=68a0fe43-e2ca-4158-851d-feff634241b6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=06c24f2f2365184cb8ba1348accc487257936b05bfd587e7a626a113fe97fc5d
	Oct 27 19:53:03 pause-470021 crio[2071]: time="2025-10-27T19:53:03.288370209Z" level=info msg="Started container" PID=2349 containerID=d58e48bacf1495d3c508b1e7222843a56c1e79dfedb59aeb62a1c6aad514d698 description=kube-system/kube-apiserver-pause-470021/kube-apiserver id=51e6cf69-7bdf-41d9-bd5f-25ddbebcb552 name=/runtime.v1.RuntimeService/StartContainer sandboxID=74428ecb1bf9a714ad7fe9173376f1ae6ab16f2310c94b853ccc94a092873ea3
	Oct 27 19:53:03 pause-470021 crio[2071]: time="2025-10-27T19:53:03.290261707Z" level=info msg="Started container" PID=2360 containerID=bb5755505420e2c40bd2f92567f06c5b2bff9cc40d5286bd60d82a6f776ca771 description=kube-system/etcd-pause-470021/etcd id=da7477d8-7781-4a43-af2e-5b33df231c81 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ab006d8a15ff1649b010b65645a16a00bd9c9ac8cf1250a0e5c7d897eb29673e
	Oct 27 19:53:03 pause-470021 crio[2071]: time="2025-10-27T19:53:03.312318767Z" level=info msg="Created container edb8ab36eea95aa2a08a1d4c6d4048ccc588d84a29202156e960705fc0fa969e: kube-system/kube-controller-manager-pause-470021/kube-controller-manager" id=ad8950be-85ad-43e3-a2f9-2c0664f29e95 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:53:03 pause-470021 crio[2071]: time="2025-10-27T19:53:03.313000421Z" level=info msg="Starting container: edb8ab36eea95aa2a08a1d4c6d4048ccc588d84a29202156e960705fc0fa969e" id=11c48ea4-3aa9-4a9b-aecd-d1173a592dcb name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:53:03 pause-470021 crio[2071]: time="2025-10-27T19:53:03.318382272Z" level=info msg="Started container" PID=2384 containerID=edb8ab36eea95aa2a08a1d4c6d4048ccc588d84a29202156e960705fc0fa969e description=kube-system/kube-controller-manager-pause-470021/kube-controller-manager id=11c48ea4-3aa9-4a9b-aecd-d1173a592dcb name=/runtime.v1.RuntimeService/StartContainer sandboxID=febdfbb1bd6f5d56d2fa07bdb9a0d8515537f990d385d50291de9bd6a8c816d2
	Oct 27 19:53:13 pause-470021 crio[2071]: time="2025-10-27T19:53:13.360668718Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 19:53:13 pause-470021 crio[2071]: time="2025-10-27T19:53:13.365034311Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 19:53:13 pause-470021 crio[2071]: time="2025-10-27T19:53:13.365067336Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 19:53:13 pause-470021 crio[2071]: time="2025-10-27T19:53:13.365092073Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 19:53:13 pause-470021 crio[2071]: time="2025-10-27T19:53:13.368166077Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 19:53:13 pause-470021 crio[2071]: time="2025-10-27T19:53:13.368197083Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 19:53:13 pause-470021 crio[2071]: time="2025-10-27T19:53:13.368218161Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 19:53:13 pause-470021 crio[2071]: time="2025-10-27T19:53:13.371869535Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 19:53:13 pause-470021 crio[2071]: time="2025-10-27T19:53:13.37190841Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 19:53:13 pause-470021 crio[2071]: time="2025-10-27T19:53:13.371967649Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 19:53:13 pause-470021 crio[2071]: time="2025-10-27T19:53:13.374737842Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 19:53:13 pause-470021 crio[2071]: time="2025-10-27T19:53:13.37476751Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 19:53:13 pause-470021 crio[2071]: time="2025-10-27T19:53:13.374797926Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 19:53:13 pause-470021 crio[2071]: time="2025-10-27T19:53:13.377698749Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 19:53:13 pause-470021 crio[2071]: time="2025-10-27T19:53:13.377728508Z" level=info msg="Updated default CNI network name to kindnet"
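This CRI-O excerpt covers the second container generation (the Started container entries at 19:53:03) plus the CNI monitor picking up the rewritten kindnet conflist. It was captured with the journalctl invocation from the log; to re-run it, or to isolate just the CNI events (the grep filter is an addition, not from the log):

    sudo journalctl -u crio -n 400
    sudo journalctl -u crio -n 400 | grep 'CNI monitoring'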
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	edb8ab36eea95       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   23 seconds ago       Running             kube-controller-manager   1                   febdfbb1bd6f5       kube-controller-manager-pause-470021   kube-system
	bb5755505420e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   23 seconds ago       Running             etcd                      1                   ab006d8a15ff1       etcd-pause-470021                      kube-system
	c384025bdb6e4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   23 seconds ago       Running             kube-scheduler            1                   06c24f2f23651       kube-scheduler-pause-470021            kube-system
	d58e48bacf149       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   23 seconds ago       Running             kube-apiserver            1                   74428ecb1bf9a       kube-apiserver-pause-470021            kube-system
	78decaae3b0e6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   23 seconds ago       Running             coredns                   1                   6b1e2709e2cfc       coredns-66bc5c9577-nrzpx               kube-system
	669fb029456a2       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   23 seconds ago       Running             kindnet-cni               1                   4c27dd3878bbc       kindnet-czq4c                          kube-system
	336f9e0fad2e2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   23 seconds ago       Running             kube-proxy                1                   371646ff17027       kube-proxy-5tqdh                       kube-system
	7eb02b6b3e6ac       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   36 seconds ago       Exited              coredns                   0                   6b1e2709e2cfc       coredns-66bc5c9577-nrzpx               kube-system
	69a0468ab45ef       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   371646ff17027       kube-proxy-5tqdh                       kube-system
	87b11ab122b25       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   4c27dd3878bbc       kindnet-czq4c                          kube-system
	bdd6aaebbf9b1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   74428ecb1bf9a       kube-apiserver-pause-470021            kube-system
	5bab714cb0c4e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   ab006d8a15ff1       etcd-pause-470021                      kube-system
	eb0212fd806c6       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   febdfbb1bd6f5       kube-controller-manager-pause-470021   kube-system
	9c65fbe4eff15       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   06c24f2f23651       kube-scheduler-pause-470021            kube-system
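The status table comes from the fallback one-liner minikube ran above; the backtick guard keeps the command usable even when crictl is not on PATH, degrading to docker:

    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

Reading the table: every component has a Running ATTEMPT 1 container sharing its sandbox (same POD ID) with an Exited ATTEMPT 0 instance, i.e. the kubelet restarted the containers in place rather than recreating the pod sandboxes.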
	
	
	==> coredns [78decaae3b0e60034f2398cb0b994dce873210be4f725d847ff7529ca517374f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41497 - 61636 "HINFO IN 303672438323418135.7660371047021629632. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.027076714s
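The restarted CoreDNS retries the in-cluster apiserver VIP (10.96.0.1:443, from the errors above) until it answers, serving with an unsynced cache in the meantime, and then logs its startup HINFO query once DNS is up. A hedged connectivity spot-check of the same endpoint from the node (assumes curl is installed; an anonymous request may be rejected with 401/403, which still confirms reachability):

    curl -sk https://10.96.0.1:443/healthz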
	
	
	==> coredns [7eb02b6b3e6acbfaa83db3a48ac65c331d99ca20916430c27ada8e7bfb90bc22] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47037 - 55541 "HINFO IN 7061574596687865819.4425809311748167007. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014568381s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-470021
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-470021
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=pause-470021
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T19_52_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 19:51:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-470021
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:53:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 19:52:50 +0000   Mon, 27 Oct 2025 19:51:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 19:52:50 +0000   Mon, 27 Oct 2025 19:51:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 19:52:50 +0000   Mon, 27 Oct 2025 19:51:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 19:52:50 +0000   Mon, 27 Oct 2025 19:52:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-470021
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                b793bb73-fce9-4fb4-b85b-85a8efd2e97d
	  Boot ID:                    23ea05b4-8203-4fb9-a84a-5deb4b091cbb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-nrzpx                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     78s
	  kube-system                 etcd-pause-470021                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         85s
	  kube-system                 kindnet-czq4c                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      78s
	  kube-system                 kube-apiserver-pause-470021             250m (12%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-controller-manager-pause-470021    200m (10%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-proxy-5tqdh                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-pause-470021             100m (5%)     0 (0%)      0 (0%)           0 (0%)         84s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 76s                kube-proxy       
	  Normal   Starting                 17s                kube-proxy       
	  Normal   NodeHasSufficientPID     92s (x8 over 92s)  kubelet          Node pause-470021 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 92s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  92s (x8 over 92s)  kubelet          Node pause-470021 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    92s (x8 over 92s)  kubelet          Node pause-470021 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 92s                kubelet          Starting kubelet.
	  Normal   Starting                 84s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 84s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  84s                kubelet          Node pause-470021 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    84s                kubelet          Node pause-470021 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     84s                kubelet          Node pause-470021 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           79s                node-controller  Node pause-470021 event: Registered Node pause-470021 in Controller
	  Normal   NodeReady                36s                kubelet          Node pause-470021 status is now: NodeReady
	  Normal   RegisteredNode           15s                node-controller  Node pause-470021 event: Registered Node pause-470021 in Controller
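This node description was captured after recovery; the same command that failed with connection refused at 19:53:18 and 19:53:21 succeeds once the apiserver is back (verbatim from the log, run inside the node):

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig

The duplicated event pairs (two Starting/CgroupV1/NodeHas* runs and two RegisteredNode entries) line up with the two kubelet and controller-manager generations seen elsewhere in this dump.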
	
	
	==> dmesg <==
	[Oct27 19:24] overlayfs: idmapped layers are currently not supported
	[Oct27 19:25] overlayfs: idmapped layers are currently not supported
	[Oct27 19:26] overlayfs: idmapped layers are currently not supported
	[  +3.069263] overlayfs: idmapped layers are currently not supported
	[Oct27 19:27] overlayfs: idmapped layers are currently not supported
	[ +40.518952] overlayfs: idmapped layers are currently not supported
	[Oct27 19:29] overlayfs: idmapped layers are currently not supported
	[Oct27 19:34] overlayfs: idmapped layers are currently not supported
	[ +33.986700] overlayfs: idmapped layers are currently not supported
	[Oct27 19:36] overlayfs: idmapped layers are currently not supported
	[Oct27 19:37] overlayfs: idmapped layers are currently not supported
	[Oct27 19:38] overlayfs: idmapped layers are currently not supported
	[Oct27 19:39] overlayfs: idmapped layers are currently not supported
	[Oct27 19:40] overlayfs: idmapped layers are currently not supported
	[  +7.015891] overlayfs: idmapped layers are currently not supported
	[Oct27 19:41] overlayfs: idmapped layers are currently not supported
	[ +28.455216] overlayfs: idmapped layers are currently not supported
	[Oct27 19:42] overlayfs: idmapped layers are currently not supported
	[ +26.490174] overlayfs: idmapped layers are currently not supported
	[Oct27 19:44] overlayfs: idmapped layers are currently not supported
	[Oct27 19:45] overlayfs: idmapped layers are currently not supported
	[Oct27 19:47] overlayfs: idmapped layers are currently not supported
	[Oct27 19:49] overlayfs: idmapped layers are currently not supported
	[ +31.410335] overlayfs: idmapped layers are currently not supported
	[Oct27 19:51] overlayfs: idmapped layers are currently not supported
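The dmesg section is nothing but repeated "overlayfs: idmapped layers are currently not supported" warnings, which this 5.15 kernel emits when a container runtime sets up overlay mounts; they appear benign here. Captured with the filter from the log:

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400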
	
	
	==> etcd [5bab714cb0c4efef9b8229cf91fc7171e0d11a7652571bf103319a343da548c2] <==
	{"level":"warn","ts":"2025-10-27T19:51:57.628680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:51:57.639477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:51:57.662837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:51:57.703405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:51:57.763412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:51:57.765246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:51:57.869701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51930","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T19:52:54.455237Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-27T19:52:54.455290Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-470021","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-10-27T19:52:54.455386Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T19:52:54.604310Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T19:52:54.604398Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T19:52:54.604421Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-10-27T19:52:54.604457Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-27T19:52:54.604542Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T19:52:54.604576Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T19:52:54.604584Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T19:52:54.604566Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-27T19:52:54.604621Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T19:52:54.604673Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T19:52:54.604683Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T19:52:54.608027Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-10-27T19:52:54.608113Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T19:52:54.608155Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-27T19:52:54.608182Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-470021","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [bb5755505420e2c40bd2f92567f06c5b2bff9cc40d5286bd60d82a6f776ca771] <==
	{"level":"warn","ts":"2025-10-27T19:53:06.165091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.200705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.228159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.254922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.281014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.325373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.377690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.428382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.534101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.560045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.580129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.604425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.634104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.645837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.660031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.679882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.698812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.716153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.750907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.760289Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.777018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.804255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.830711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.841966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:53:06.944796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60838","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:53:26 up  2:35,  0 user,  load average: 2.23, 2.62, 2.26
	Linux pause-470021 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
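A hedged reconstruction of how the kernel section is gathered (the exact collector command is not shown in this log; uptime, uname, and the os-release field produce matching lines):

    uptime; uname -a; grep PRETTY_NAME /etc/os-release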
	
	
	==> kindnet [669fb029456a2c87088307a9f1f18b0a214174bd7c9bb178f24f3ca5a7fc6e1b] <==
	I1027 19:53:03.025503       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 19:53:03.025741       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1027 19:53:03.025865       1 main.go:148] setting mtu 1500 for CNI 
	I1027 19:53:03.025876       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 19:53:03.025887       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T19:53:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 19:53:03.371322       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 19:53:03.371418       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 19:53:03.371430       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 19:53:03.372410       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 19:53:03.372754       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1027 19:53:03.372850       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1027 19:53:03.372965       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1027 19:53:03.373076       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1027 19:53:08.175517       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 19:53:08.175559       1 metrics.go:72] Registering metrics
	I1027 19:53:08.175610       1 controller.go:711] "Syncing nftables rules"
	I1027 19:53:13.360127       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:53:13.360307       1 main.go:301] handling current node
	I1027 19:53:23.360196       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:53:23.360242       1 main.go:301] handling current node
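kindnet mirrors the CoreDNS pattern: its list/watch calls fail with connection refused until the apiserver returns around 19:53:08, caches sync, and it resumes reconciling the single node on its periodic loop (entries at 19:53:13 and 19:53:23). To pull this log straight from CRI-O even while the apiserver is down (container ID from the section header):

    sudo crictl logs 669fb029456a2c87088307a9f1f18b0a214174bd7c9bb178f24f3ca5a7fc6e1b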
	
	
	==> kindnet [87b11ab122b25bab328b375d9d04001497127ff2585367e02e286374156d6569] <==
	I1027 19:52:09.638204       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 19:52:09.715328       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1027 19:52:09.715473       1 main.go:148] setting mtu 1500 for CNI 
	I1027 19:52:09.715492       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 19:52:09.715507       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T19:52:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 19:52:09.916266       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 19:52:09.916424       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 19:52:09.916436       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 19:52:09.917527       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 19:52:39.916263       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1027 19:52:39.917444       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1027 19:52:39.917542       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1027 19:52:39.917638       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1027 19:52:41.517001       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 19:52:41.517038       1 metrics.go:72] Registering metrics
	I1027 19:52:41.517105       1 controller.go:711] "Syncing nftables rules"
	I1027 19:52:49.920496       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:52:49.920551       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bdd6aaebbf9b1f7984851798a00219d0ab9df585fc4aa577a1bf0220aa1fd7fc] <==
	W1027 19:52:54.468805       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.468871       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.468910       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.468947       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.468982       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.469018       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.469057       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.469095       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.469135       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.469172       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.469276       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.470314       1 logging.go:55] [core] [Channel #25 SubChannel #27]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.470365       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.470405       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.470444       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.470482       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.470523       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.470686       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.470736       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.470779       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.471514       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.471584       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.471627       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.476222       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1027 19:52:54.476343       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
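This block is the first apiserver generation shutting down at 19:52:54: every grpc channel to etcd at 127.0.0.1:2379 reports connection refused because etcd (see its log above) closed first. A quick hedged check for whether etcd is accepting connections again on the node:

    sudo ss -ltnp | grep -E ':(2379|2380)'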
	
	
	==> kube-apiserver [d58e48bacf1495d3c508b1e7222843a56c1e79dfedb59aeb62a1c6aad514d698] <==
	I1027 19:53:08.089513       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1027 19:53:08.089631       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1027 19:53:08.097106       1 aggregator.go:171] initial CRD sync complete...
	I1027 19:53:08.097201       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 19:53:08.097249       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 19:53:08.107314       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 19:53:08.122549       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1027 19:53:08.122641       1 policy_source.go:240] refreshing policies
	I1027 19:53:08.153248       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 19:53:08.153554       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1027 19:53:08.157264       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 19:53:08.157381       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1027 19:53:08.157399       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1027 19:53:08.159364       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1027 19:53:08.160999       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 19:53:08.169413       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:53:08.177132       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1027 19:53:08.186929       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1027 19:53:08.197716       1 cache.go:39] Caches are synced for autoregister controller
	I1027 19:53:08.763719       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 19:53:10.142027       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 19:53:11.517509       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 19:53:11.756754       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 19:53:11.806645       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 19:53:11.858682       1 controller.go:667] quota admission added evaluator for: deployments.apps
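The second apiserver generation is healthy by 19:53:08 (caches synced, APF config worker running) and is admitting writes again by 19:53:11. A standard hedged readiness probe using the same binary and kubeconfig minikube uses:

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw='/readyz?verbose'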
	
	
	==> kube-controller-manager [eb0212fd806c68f3148012bf7e975926ecbbcc8417725249917368a738dcb11d] <==
	I1027 19:52:07.171760       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 19:52:07.214327       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1027 19:52:07.216766       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1027 19:52:07.216943       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1027 19:52:07.219248       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 19:52:07.219367       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 19:52:07.219408       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 19:52:07.219508       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 19:52:07.219535       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 19:52:07.219591       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 19:52:07.219640       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 19:52:07.219515       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 19:52:07.219526       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 19:52:07.226318       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1027 19:52:07.226426       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1027 19:52:07.226473       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1027 19:52:07.226503       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1027 19:52:07.226530       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 19:52:07.227076       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 19:52:07.231038       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 19:52:07.231167       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 19:52:07.231258       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-470021"
	I1027 19:52:07.231338       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1027 19:52:07.247155       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-470021" podCIDRs=["10.244.0.0/24"]
	I1027 19:52:52.237622       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [edb8ab36eea95aa2a08a1d4c6d4048ccc588d84a29202156e960705fc0fa969e] <==
	I1027 19:53:11.502438       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 19:53:11.502524       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1027 19:53:11.502577       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1027 19:53:11.502640       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1027 19:53:11.503958       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 19:53:11.504733       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 19:53:11.505123       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-470021"
	I1027 19:53:11.505178       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1027 19:53:11.505807       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 19:53:11.507230       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 19:53:11.510233       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 19:53:11.511855       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 19:53:11.514976       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 19:53:11.517383       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 19:53:11.518688       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1027 19:53:11.520918       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 19:53:11.520931       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 19:53:11.523984       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1027 19:53:11.525226       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1027 19:53:11.526403       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 19:53:11.542736       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:53:11.549072       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:53:11.549168       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 19:53:11.549202       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 19:53:11.550306       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [336f9e0fad2e20cdf64c964f42b31d59f4633709021335d2e471a64c05518236] <==
	I1027 19:53:03.003442       1 server_linux.go:53] "Using iptables proxy"
	I1027 19:53:03.237935       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1027 19:53:03.239112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-470021&limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1027 19:53:08.238216       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 19:53:08.238362       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1027 19:53:08.238432       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 19:53:09.035066       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 19:53:09.035146       1 server_linux.go:132] "Using iptables Proxier"
	I1027 19:53:09.735224       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 19:53:09.735556       1 server.go:527] "Version info" version="v1.34.1"
	I1027 19:53:09.735579       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:53:09.739004       1 config.go:200] "Starting service config controller"
	I1027 19:53:09.739032       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 19:53:09.739057       1 config.go:106] "Starting endpoint slice config controller"
	I1027 19:53:09.739071       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 19:53:09.739196       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 19:53:09.739208       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 19:53:09.742085       1 config.go:309] "Starting node config controller"
	I1027 19:53:09.742100       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 19:53:09.742106       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 19:53:09.842413       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 19:53:09.842450       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 19:53:09.842489       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [69a0468ab45ef90113638021bffb192716f99ab5157428ccbd881c54365dd32d] <==
	I1027 19:52:10.241839       1 server_linux.go:53] "Using iptables proxy"
	I1027 19:52:10.317024       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 19:52:10.417817       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 19:52:10.417851       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1027 19:52:10.417958       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 19:52:10.437023       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 19:52:10.437078       1 server_linux.go:132] "Using iptables Proxier"
	I1027 19:52:10.440949       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 19:52:10.441242       1 server.go:527] "Version info" version="v1.34.1"
	I1027 19:52:10.441311       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:52:10.445503       1 config.go:200] "Starting service config controller"
	I1027 19:52:10.445532       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 19:52:10.446185       1 config.go:106] "Starting endpoint slice config controller"
	I1027 19:52:10.446205       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 19:52:10.447606       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 19:52:10.447632       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 19:52:10.448092       1 config.go:309] "Starting node config controller"
	I1027 19:52:10.448137       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 19:52:10.448168       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 19:52:10.546111       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 19:52:10.548269       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 19:52:10.548275       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [9c65fbe4eff1527a24e959481045341035c8c2ea0c34900ec330446573f13baf] <==
	E1027 19:52:00.801448       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 19:52:00.801942       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1027 19:52:00.804299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 19:52:00.804478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 19:52:00.806452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 19:52:00.806723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 19:52:00.809187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 19:52:00.809760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 19:52:00.809785       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 19:52:00.809843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 19:52:00.809934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 19:52:00.809947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 19:52:00.810026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 19:52:00.810092       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 19:52:00.810137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 19:52:00.810267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 19:52:00.810326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 19:52:00.810820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1027 19:52:01.987808       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:52:54.461964       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1027 19:52:54.462013       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1027 19:52:54.462032       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1027 19:52:54.462054       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:52:54.462261       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1027 19:52:54.462285       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c384025bdb6e4e5fd0f211129c90fa79ae691468550d2be99ba79f5cff89e73e] <==
	I1027 19:53:07.432436       1 serving.go:386] Generated self-signed cert in-memory
	I1027 19:53:09.437324       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 19:53:09.437352       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:53:09.446016       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 19:53:09.449212       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1027 19:53:09.449255       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1027 19:53:09.449288       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 19:53:09.463850       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:53:09.463959       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:53:09.464006       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 19:53:09.464280       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 19:53:09.550446       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1027 19:53:09.565413       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 19:53:09.565537       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 19:53:02 pause-470021 kubelet[1314]: E1027 19:53:02.971224    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-470021\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9db0def9482bee08a3927a69a0e172a2" pod="kube-system/kube-apiserver-pause-470021"
	Oct 27 19:53:02 pause-470021 kubelet[1314]: E1027 19:53:02.982552    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-470021\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="b34aaf5d95741c4a53031f9f12fa5cc2" pod="kube-system/kube-scheduler-pause-470021"
	Oct 27 19:53:02 pause-470021 kubelet[1314]: E1027 19:53:02.983089    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-470021\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9db0def9482bee08a3927a69a0e172a2" pod="kube-system/kube-apiserver-pause-470021"
	Oct 27 19:53:02 pause-470021 kubelet[1314]: E1027 19:53:02.983558    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-470021\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="02116935838487690dbac84a98c92f2e" pod="kube-system/etcd-pause-470021"
	Oct 27 19:53:02 pause-470021 kubelet[1314]: E1027 19:53:02.984095    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-470021\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="c51896afdd83921e4a292cc17c927160" pod="kube-system/kube-controller-manager-pause-470021"
	Oct 27 19:53:02 pause-470021 kubelet[1314]: E1027 19:53:02.984414    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-czq4c\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="0b877aea-545c-4196-abcc-1c1856b6e3cb" pod="kube-system/kindnet-czq4c"
	Oct 27 19:53:02 pause-470021 kubelet[1314]: E1027 19:53:02.984773    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5tqdh\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="096a2e44-c862-4412-a1d6-080237dfc726" pod="kube-system/kube-proxy-5tqdh"
	Oct 27 19:53:02 pause-470021 kubelet[1314]: E1027 19:53:02.985086    1314 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-nrzpx\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="1ee91970-2f04-4fd7-b25b-8939d1ac7bd0" pod="kube-system/coredns-66bc5c9577-nrzpx"
	Oct 27 19:53:07 pause-470021 kubelet[1314]: E1027 19:53:07.854523    1314 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-470021\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-470021' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 27 19:53:07 pause-470021 kubelet[1314]: E1027 19:53:07.855311    1314 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-470021\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-470021' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 27 19:53:07 pause-470021 kubelet[1314]: E1027 19:53:07.855584    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-470021\" is forbidden: User \"system:node:pause-470021\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-470021' and this object" podUID="9db0def9482bee08a3927a69a0e172a2" pod="kube-system/kube-apiserver-pause-470021"
	Oct 27 19:53:07 pause-470021 kubelet[1314]: E1027 19:53:07.894002    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-470021\" is forbidden: User \"system:node:pause-470021\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-470021' and this object" podUID="02116935838487690dbac84a98c92f2e" pod="kube-system/etcd-pause-470021"
	Oct 27 19:53:07 pause-470021 kubelet[1314]: E1027 19:53:07.964399    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-470021\" is forbidden: User \"system:node:pause-470021\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-470021' and this object" podUID="c51896afdd83921e4a292cc17c927160" pod="kube-system/kube-controller-manager-pause-470021"
	Oct 27 19:53:08 pause-470021 kubelet[1314]: E1027 19:53:08.002702    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-czq4c\" is forbidden: User \"system:node:pause-470021\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-470021' and this object" podUID="0b877aea-545c-4196-abcc-1c1856b6e3cb" pod="kube-system/kindnet-czq4c"
	Oct 27 19:53:08 pause-470021 kubelet[1314]: E1027 19:53:08.012844    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-5tqdh\" is forbidden: User \"system:node:pause-470021\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-470021' and this object" podUID="096a2e44-c862-4412-a1d6-080237dfc726" pod="kube-system/kube-proxy-5tqdh"
	Oct 27 19:53:08 pause-470021 kubelet[1314]: E1027 19:53:08.025220    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-nrzpx\" is forbidden: User \"system:node:pause-470021\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-470021' and this object" podUID="1ee91970-2f04-4fd7-b25b-8939d1ac7bd0" pod="kube-system/coredns-66bc5c9577-nrzpx"
	Oct 27 19:53:08 pause-470021 kubelet[1314]: E1027 19:53:08.049908    1314 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-470021\" is forbidden: User \"system:node:pause-470021\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-470021' and this object" podUID="b34aaf5d95741c4a53031f9f12fa5cc2" pod="kube-system/kube-scheduler-pause-470021"
	Oct 27 19:53:08 pause-470021 kubelet[1314]: E1027 19:53:08.104116    1314 status_manager.go:1018] "Failed to get status for pod" err=<
	Oct 27 19:53:08 pause-470021 kubelet[1314]:         pods "kindnet-czq4c" is forbidden: User "system:node:pause-470021" cannot get resource "pods" in API group "" in the namespace "kube-system": no relationship found between node 'pause-470021' and this object
	Oct 27 19:53:08 pause-470021 kubelet[1314]:         RBAC: [role.rbac.authorization.k8s.io "kubeadm:kubelet-config" not found, role.rbac.authorization.k8s.io "kubeadm:nodes-kubeadm-config" not found]
	Oct 27 19:53:08 pause-470021 kubelet[1314]:  > podUID="0b877aea-545c-4196-abcc-1c1856b6e3cb" pod="kube-system/kindnet-czq4c"
	Oct 27 19:53:12 pause-470021 kubelet[1314]: W1027 19:53:12.852489    1314 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 27 19:53:21 pause-470021 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 19:53:22 pause-470021 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 19:53:22 pause-470021 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-470021 -n pause-470021
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-470021 -n pause-470021: exit status 2 (394.611904ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
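The Running value above comes from the Go template passed to status (--format={{.APIServer}}); the same mechanism can read more than one status field in a single call. A minimal sketch, assuming only the template fields Host and APIServer that this report already queries elsewhere:

    out/minikube-linux-arm64 status -p pause-470021 --format '{{.Host}} {{.APIServer}}'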
helpers_test.go:269: (dbg) Run:  kubectl --context pause-470021 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.49s)
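The tail of the minikube logs above shows systemd stopping kubelet on pause-470021 while status still reports the apiserver as Running, which matches a pause that only partially applied. A short sketch for inspecting the node-side state by hand, using standard docker/systemd/journalctl invocations against the container name from this report (illustrative only, not part of the test run):

    docker exec pause-470021 sudo systemctl is-active kubelet              # expect "inactive" after the stop above
    docker exec pause-470021 sudo journalctl -u kubelet -n 20 --no-pager   # last kubelet messages before shutdown
    docker exec pause-470021 sudo crictl ps -a                             # what cri-o still reports for the node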

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-942644 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-942644 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (261.841547ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:58:08Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
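The error text shows where the check fails: enabling an addon first probes for paused containers by running sudo runc list -f json on the node, and runc exits 1 because its /run/runc state directory does not exist. A minimal reproduction against the node container, assuming only the profile name from this test (runc list and its -f json flag are standard runc CLI):

    docker exec old-k8s-version-942644 sudo runc list -f json   # reproduces the exit status 1 above
    docker exec old-k8s-version-942644 ls -ld /run/runc         # confirms the state directory is missing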
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-942644 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-942644 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-942644 describe deploy/metrics-server -n kube-system: exit status 1 (78.925044ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-942644 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
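For reference, the expected string is composed from the two flags passed above: --registries=MetricsServer=fake.domain supplies the registry prefix and --images=MetricsServer=registry.k8s.io/echoserver:1.4 the image path, giving fake.domain/registry.k8s.io/echoserver:1.4. Had the deployment been created, a jsonpath query like the following would read the image back (standard kubectl usage; shown only as a sketch):

    kubectl --context old-k8s-version-942644 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'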
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-942644
helpers_test.go:243: (dbg) docker inspect old-k8s-version-942644:

-- stdout --
	[
	    {
	        "Id": "10950a3c65bf121769a3f4633bb765a9cdb0464d8195fe6bebdec626b5deca5a",
	        "Created": "2025-10-27T19:57:02.220286943Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 445516,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T19:57:02.32123737Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/10950a3c65bf121769a3f4633bb765a9cdb0464d8195fe6bebdec626b5deca5a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/10950a3c65bf121769a3f4633bb765a9cdb0464d8195fe6bebdec626b5deca5a/hostname",
	        "HostsPath": "/var/lib/docker/containers/10950a3c65bf121769a3f4633bb765a9cdb0464d8195fe6bebdec626b5deca5a/hosts",
	        "LogPath": "/var/lib/docker/containers/10950a3c65bf121769a3f4633bb765a9cdb0464d8195fe6bebdec626b5deca5a/10950a3c65bf121769a3f4633bb765a9cdb0464d8195fe6bebdec626b5deca5a-json.log",
	        "Name": "/old-k8s-version-942644",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-942644:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-942644",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "10950a3c65bf121769a3f4633bb765a9cdb0464d8195fe6bebdec626b5deca5a",
	                "LowerDir": "/var/lib/docker/overlay2/ac29eb6ae0b9a6af31ee0c5db23452fde2f2593679f1a0a32d72317c10177f51-init/diff:/var/lib/docker/overlay2/0567218bbe05e8b59b2aef7ad82032d6716040c1f9d9d73e7de8682101079f76/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ac29eb6ae0b9a6af31ee0c5db23452fde2f2593679f1a0a32d72317c10177f51/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ac29eb6ae0b9a6af31ee0c5db23452fde2f2593679f1a0a32d72317c10177f51/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ac29eb6ae0b9a6af31ee0c5db23452fde2f2593679f1a0a32d72317c10177f51/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-942644",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-942644/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-942644",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-942644",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-942644",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6501eb0f282626c97c200eb37c4815315653658122313f4f09be597bd568e662",
	            "SandboxKey": "/var/run/docker/netns/6501eb0f2826",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33408"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33409"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33412"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33410"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33411"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-942644": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:65:3b:c9:90:72",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7843814e6694ec1a5f4ec1f5c9fd29cf174989ede4bbc78e0ebce293c1be9090",
	                    "EndpointID": "272c9c9a7e783e603a2dddbfbb38667d36075c331b4d834c3ab8cafede2cae14",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-942644",
	                        "10950a3c65bf"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
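The inspect output shows 8443/tcp published on a random loopback port (127.0.0.1:33411 here). Either of the following standard docker CLI calls extracts that mapping without parsing the full JSON (a sketch against the container name above):

    docker port old-k8s-version-942644 8443/tcp
    docker inspect old-k8s-version-942644 \
      --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'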
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-942644 -n old-k8s-version-942644
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-942644 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-942644 logs -n 25: (1.176035652s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-750423 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-750423             │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │                     │
	│ ssh     │ -p cilium-750423 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-750423             │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │                     │
	│ ssh     │ -p cilium-750423 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-750423             │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │                     │
	│ ssh     │ -p cilium-750423 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-750423             │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │                     │
	│ ssh     │ -p cilium-750423 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-750423             │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │                     │
	│ ssh     │ -p cilium-750423 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-750423             │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │                     │
	│ ssh     │ -p cilium-750423 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-750423             │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │                     │
	│ ssh     │ -p cilium-750423 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-750423             │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │                     │
	│ ssh     │ -p cilium-750423 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-750423             │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │                     │
	│ ssh     │ -p cilium-750423 sudo containerd config dump                                                                                                                                                                                                  │ cilium-750423             │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │                     │
	│ ssh     │ -p cilium-750423 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-750423             │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │                     │
	│ ssh     │ -p cilium-750423 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-750423             │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │                     │
	│ ssh     │ -p cilium-750423 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-750423             │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │                     │
	│ ssh     │ -p cilium-750423 sudo crio config                                                                                                                                                                                                             │ cilium-750423             │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │                     │
	│ delete  │ -p cilium-750423                                                                                                                                                                                                                              │ cilium-750423             │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │ 27 Oct 25 19:54 UTC │
	│ start   │ -p force-systemd-env-105360 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-105360  │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │ 27 Oct 25 19:55 UTC │
	│ delete  │ -p force-systemd-env-105360                                                                                                                                                                                                                   │ force-systemd-env-105360  │ jenkins │ v1.37.0 │ 27 Oct 25 19:55 UTC │ 27 Oct 25 19:55 UTC │
	│ start   │ -p cert-expiration-280013 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-280013    │ jenkins │ v1.37.0 │ 27 Oct 25 19:55 UTC │ 27 Oct 25 19:55 UTC │
	│ delete  │ -p kubernetes-upgrade-524430                                                                                                                                                                                                                  │ kubernetes-upgrade-524430 │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:56 UTC │
	│ start   │ -p cert-options-319273 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-319273       │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:56 UTC │
	│ ssh     │ cert-options-319273 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-319273       │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:56 UTC │
	│ ssh     │ -p cert-options-319273 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-319273       │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:56 UTC │
	│ delete  │ -p cert-options-319273                                                                                                                                                                                                                        │ cert-options-319273       │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:56 UTC │
	│ start   │ -p old-k8s-version-942644 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:57 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-942644 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 19:56:56
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 19:56:56.133083  445124 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:56:56.133279  445124 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:56:56.133312  445124 out.go:374] Setting ErrFile to fd 2...
	I1027 19:56:56.133337  445124 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:56:56.133589  445124 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 19:56:56.134035  445124 out.go:368] Setting JSON to false
	I1027 19:56:56.134959  445124 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9569,"bootTime":1761585448,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1027 19:56:56.135100  445124 start.go:141] virtualization:  
	I1027 19:56:56.139068  445124 out.go:179] * [old-k8s-version-942644] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 19:56:56.143595  445124 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:56:56.143727  445124 notify.go:220] Checking for updates...
	I1027 19:56:56.150363  445124 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:56:56.153669  445124 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 19:56:56.156891  445124 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube
	I1027 19:56:56.160049  445124 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 19:56:56.163162  445124 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:56:56.166730  445124 config.go:182] Loaded profile config "cert-expiration-280013": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:56:56.166912  445124 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:56:56.197089  445124 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 19:56:56.197282  445124 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:56:56.256007  445124 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 19:56:56.24691691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 19:56:56.256122  445124 docker.go:318] overlay module found
	I1027 19:56:56.261162  445124 out.go:179] * Using the docker driver based on user configuration
	I1027 19:56:56.264108  445124 start.go:305] selected driver: docker
	I1027 19:56:56.264132  445124 start.go:925] validating driver "docker" against <nil>
	I1027 19:56:56.264146  445124 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:56:56.264875  445124 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:56:56.319293  445124 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 19:56:56.310261291 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 19:56:56.319459  445124 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1027 19:56:56.319683  445124 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 19:56:56.322693  445124 out.go:179] * Using Docker driver with root privileges
	I1027 19:56:56.325585  445124 cni.go:84] Creating CNI manager for ""
	I1027 19:56:56.325655  445124 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:56:56.325670  445124 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 19:56:56.325747  445124 start.go:349] cluster config:
	{Name:old-k8s-version-942644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-942644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:56:56.330695  445124 out.go:179] * Starting "old-k8s-version-942644" primary control-plane node in "old-k8s-version-942644" cluster
	I1027 19:56:56.333622  445124 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 19:56:56.336552  445124 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 19:56:56.339480  445124 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 19:56:56.339636  445124 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1027 19:56:56.339670  445124 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1027 19:56:56.339683  445124 cache.go:58] Caching tarball of preloaded images
	I1027 19:56:56.339756  445124 preload.go:233] Found /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 19:56:56.339778  445124 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1027 19:56:56.339890  445124 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/config.json ...
	I1027 19:56:56.339915  445124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/config.json: {Name:mk54e69297db4a5d1cebaec2bd2e1a3dd09b025c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:56:56.366000  445124 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 19:56:56.366030  445124 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 19:56:56.366049  445124 cache.go:232] Successfully downloaded all kic artifacts
	I1027 19:56:56.366072  445124 start.go:360] acquireMachinesLock for old-k8s-version-942644: {Name:mkd5c1f7277bdb6a653be1420f80ffe4d64c962c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:56:56.366182  445124 start.go:364] duration metric: took 89.269µs to acquireMachinesLock for "old-k8s-version-942644"
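
The lock.go:35/start.go:364 entries above reflect a retry-based file lock with the shape {Delay:500ms Timeout:...}. As a rough illustration only (a hypothetical acquire helper, not minikube's actual lock.go), the same exclusive-create-with-retry pattern looks like this in Go:

package main

import (
	"fmt"
	"os"
	"time"
)

// acquire takes an exclusive lock by creating path with O_EXCL (creation
// fails if the file already exists), retrying every delay until timeout.
// The returned func releases the lock by removing the file.
func acquire(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	// Lock file name is illustrative, not a real minikube path.
	release, err := acquire("/tmp/profile-config.lock", 500*time.Millisecond, time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("lock held; safe to write config.json")
}
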
	I1027 19:56:56.366211  445124 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-942644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-942644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:56:56.366285  445124 start.go:125] createHost starting for "" (driver="docker")
	I1027 19:56:56.369615  445124 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 19:56:56.369848  445124 start.go:159] libmachine.API.Create for "old-k8s-version-942644" (driver="docker")
	I1027 19:56:56.369897  445124 client.go:168] LocalClient.Create starting
	I1027 19:56:56.369994  445124 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem
	I1027 19:56:56.370031  445124 main.go:141] libmachine: Decoding PEM data...
	I1027 19:56:56.370046  445124 main.go:141] libmachine: Parsing certificate...
	I1027 19:56:56.370096  445124 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem
	I1027 19:56:56.370112  445124 main.go:141] libmachine: Decoding PEM data...
	I1027 19:56:56.370125  445124 main.go:141] libmachine: Parsing certificate...
	I1027 19:56:56.370494  445124 cli_runner.go:164] Run: docker network inspect old-k8s-version-942644 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 19:56:56.387099  445124 cli_runner.go:211] docker network inspect old-k8s-version-942644 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 19:56:56.387196  445124 network_create.go:284] running [docker network inspect old-k8s-version-942644] to gather additional debugging logs...
	I1027 19:56:56.387218  445124 cli_runner.go:164] Run: docker network inspect old-k8s-version-942644
	W1027 19:56:56.402904  445124 cli_runner.go:211] docker network inspect old-k8s-version-942644 returned with exit code 1
	I1027 19:56:56.402935  445124 network_create.go:287] error running [docker network inspect old-k8s-version-942644]: docker network inspect old-k8s-version-942644: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-942644 not found
	I1027 19:56:56.402950  445124 network_create.go:289] output of [docker network inspect old-k8s-version-942644]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-942644 not found
	
	** /stderr **
	I1027 19:56:56.403070  445124 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 19:56:56.419756  445124 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-74ee89127400 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:aa:c7:29:bd:7a:a9} reservation:<nil>}
	I1027 19:56:56.420208  445124 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9c57ca829ac8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:0a:24:08:53:d6:07} reservation:<nil>}
	I1027 19:56:56.420446  445124 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b7a7c45fd176 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:52:e6:cd:c8:a6} reservation:<nil>}
	I1027 19:56:56.420874  445124 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a31400}
	I1027 19:56:56.420903  445124 network_create.go:124] attempt to create docker network old-k8s-version-942644 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1027 19:56:56.420959  445124 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-942644 old-k8s-version-942644
	I1027 19:56:56.481318  445124 network_create.go:108] docker network old-k8s-version-942644 192.168.76.0/24 created
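
The network.go:211/206 lines above skip 192.168.49.0/24, 192.168.58.0/24, and 192.168.67.0/24 because a local bridge interface already sits in each, then settle on 192.168.76.0/24. A minimal sketch of that probing, using hypothetical names (candidateSubnets, subnetTaken) rather than minikube's real code:

package main

import (
	"fmt"
	"net"
)

// candidateSubnets mirrors the step pattern visible in the log (49, 58, 67, 76, ...).
var candidateSubnets = []string{
	"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24",
}

// subnetTaken reports whether any local interface address falls inside cidr.
func subnetTaken(cidr *net.IPNet) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return true // be conservative: treat errors as "taken"
	}
	for _, a := range addrs {
		if ipn, ok := a.(*net.IPNet); ok && cidr.Contains(ipn.IP) {
			return true
		}
	}
	return false
}

func main() {
	for _, c := range candidateSubnets {
		_, cidr, err := net.ParseCIDR(c)
		if err != nil {
			continue
		}
		if !subnetTaken(cidr) {
			fmt.Println("using free private subnet:", cidr)
			return
		}
		fmt.Println("skipping subnet that is taken:", cidr)
	}
}
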
	I1027 19:56:56.481379  445124 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-942644" container
	I1027 19:56:56.481475  445124 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 19:56:56.499209  445124 cli_runner.go:164] Run: docker volume create old-k8s-version-942644 --label name.minikube.sigs.k8s.io=old-k8s-version-942644 --label created_by.minikube.sigs.k8s.io=true
	I1027 19:56:56.517940  445124 oci.go:103] Successfully created a docker volume old-k8s-version-942644
	I1027 19:56:56.518033  445124 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-942644-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-942644 --entrypoint /usr/bin/test -v old-k8s-version-942644:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 19:56:57.040810  445124 oci.go:107] Successfully prepared a docker volume old-k8s-version-942644
	I1027 19:56:57.040865  445124 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1027 19:56:57.040886  445124 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 19:56:57.040970  445124 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-942644:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1027 19:57:02.143322  445124 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-942644:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.102313549s)
	I1027 19:57:02.143356  445124 kic.go:203] duration metric: took 5.102466456s to extract preloaded images to volume ...
	W1027 19:57:02.143494  445124 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1027 19:57:02.143599  445124 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 19:57:02.203949  445124 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-942644 --name old-k8s-version-942644 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-942644 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-942644 --network old-k8s-version-942644 --ip 192.168.76.2 --volume old-k8s-version-942644:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 19:57:02.559360  445124 cli_runner.go:164] Run: docker container inspect old-k8s-version-942644 --format={{.State.Running}}
	I1027 19:57:02.579660  445124 cli_runner.go:164] Run: docker container inspect old-k8s-version-942644 --format={{.State.Status}}
	I1027 19:57:02.606224  445124 cli_runner.go:164] Run: docker exec old-k8s-version-942644 stat /var/lib/dpkg/alternatives/iptables
	I1027 19:57:02.673319  445124 oci.go:144] the created container "old-k8s-version-942644" has a running status.
	I1027 19:57:02.673356  445124 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21801-266035/.minikube/machines/old-k8s-version-942644/id_rsa...
	I1027 19:57:03.113809  445124 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21801-266035/.minikube/machines/old-k8s-version-942644/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 19:57:03.137134  445124 cli_runner.go:164] Run: docker container inspect old-k8s-version-942644 --format={{.State.Status}}
	I1027 19:57:03.154340  445124 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 19:57:03.154365  445124 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-942644 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 19:57:03.195709  445124 cli_runner.go:164] Run: docker container inspect old-k8s-version-942644 --format={{.State.Status}}
	I1027 19:57:03.213798  445124 machine.go:93] provisionDockerMachine start ...
	I1027 19:57:03.213906  445124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-942644
	I1027 19:57:03.231901  445124 main.go:141] libmachine: Using SSH client type: native
	I1027 19:57:03.232252  445124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1027 19:57:03.232268  445124 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 19:57:03.232908  445124 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
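
The handshake EOF above is transient: the container's sshd is still coming up, and per the timestamps the dial succeeds about three seconds later. A minimal sketch of that wait loop (hypothetical waitForSSH helper; the mapped port 33408 is taken from the log):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls the mapped SSH port until a TCP connection succeeds
// or the timeout elapses.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // port is accepting connections
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
}

func main() {
	if err := waitForSSH("127.0.0.1:33408", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
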
	I1027 19:57:06.382649  445124 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-942644
	
	I1027 19:57:06.382675  445124 ubuntu.go:182] provisioning hostname "old-k8s-version-942644"
	I1027 19:57:06.382740  445124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-942644
	I1027 19:57:06.400069  445124 main.go:141] libmachine: Using SSH client type: native
	I1027 19:57:06.400386  445124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1027 19:57:06.400402  445124 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-942644 && echo "old-k8s-version-942644" | sudo tee /etc/hostname
	I1027 19:57:06.555386  445124 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-942644
	
	I1027 19:57:06.555565  445124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-942644
	I1027 19:57:06.572035  445124 main.go:141] libmachine: Using SSH client type: native
	I1027 19:57:06.572346  445124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1027 19:57:06.572369  445124 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-942644' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-942644/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-942644' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 19:57:06.723113  445124 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 19:57:06.723140  445124 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-266035/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-266035/.minikube}
	I1027 19:57:06.723166  445124 ubuntu.go:190] setting up certificates
	I1027 19:57:06.723176  445124 provision.go:84] configureAuth start
	I1027 19:57:06.723238  445124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-942644
	I1027 19:57:06.739679  445124 provision.go:143] copyHostCerts
	I1027 19:57:06.739745  445124 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem, removing ...
	I1027 19:57:06.739790  445124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem
	I1027 19:57:06.739876  445124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem (1675 bytes)
	I1027 19:57:06.740043  445124 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem, removing ...
	I1027 19:57:06.740056  445124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem
	I1027 19:57:06.740091  445124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem (1078 bytes)
	I1027 19:57:06.740167  445124 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem, removing ...
	I1027 19:57:06.740177  445124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem
	I1027 19:57:06.740208  445124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem (1123 bytes)
	I1027 19:57:06.740274  445124 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-942644 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-942644]
	I1027 19:57:07.562050  445124 provision.go:177] copyRemoteCerts
	I1027 19:57:07.562123  445124 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 19:57:07.562166  445124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-942644
	I1027 19:57:07.579516  445124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/old-k8s-version-942644/id_rsa Username:docker}
	I1027 19:57:07.682564  445124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 19:57:07.700155  445124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1027 19:57:07.717629  445124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 19:57:07.735768  445124 provision.go:87] duration metric: took 1.012562738s to configureAuth
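
configureAuth above generated a server certificate signed by the local CA with the SANs [127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-942644]. A self-contained sketch of that kind of issuance with Go's standard crypto/x509 (it self-generates a stand-in CA so the sketch runs end to end; error handling elided for brevity; this is not minikube's provision.go):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-ins for ca.pem / ca-key.pem from the log.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration above
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs listed in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-942644"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-942644"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
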
	I1027 19:57:07.735797  445124 ubuntu.go:206] setting minikube options for container-runtime
	I1027 19:57:07.735985  445124 config.go:182] Loaded profile config "old-k8s-version-942644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1027 19:57:07.736091  445124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-942644
	I1027 19:57:07.758106  445124 main.go:141] libmachine: Using SSH client type: native
	I1027 19:57:07.758415  445124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1027 19:57:07.758436  445124 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 19:57:08.015905  445124 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 19:57:08.015975  445124 machine.go:96] duration metric: took 4.802148282s to provisionDockerMachine
	I1027 19:57:08.015994  445124 client.go:171] duration metric: took 11.646084107s to LocalClient.Create
	I1027 19:57:08.016015  445124 start.go:167] duration metric: took 11.646167649s to libmachine.API.Create "old-k8s-version-942644"
	I1027 19:57:08.016022  445124 start.go:293] postStartSetup for "old-k8s-version-942644" (driver="docker")
	I1027 19:57:08.016033  445124 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 19:57:08.016133  445124 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 19:57:08.016186  445124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-942644
	I1027 19:57:08.034472  445124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/old-k8s-version-942644/id_rsa Username:docker}
	I1027 19:57:08.138705  445124 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 19:57:08.141926  445124 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 19:57:08.141995  445124 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 19:57:08.142013  445124 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/addons for local assets ...
	I1027 19:57:08.142070  445124 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/files for local assets ...
	I1027 19:57:08.142156  445124 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem -> 2678802.pem in /etc/ssl/certs
	I1027 19:57:08.142268  445124 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 19:57:08.149338  445124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 19:57:08.166174  445124 start.go:296] duration metric: took 150.136871ms for postStartSetup
	I1027 19:57:08.166584  445124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-942644
	I1027 19:57:08.182669  445124 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/config.json ...
	I1027 19:57:08.182941  445124 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:57:08.183047  445124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-942644
	I1027 19:57:08.200032  445124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/old-k8s-version-942644/id_rsa Username:docker}
	I1027 19:57:08.299951  445124 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 19:57:08.304272  445124 start.go:128] duration metric: took 11.93797242s to createHost
	I1027 19:57:08.304296  445124 start.go:83] releasing machines lock for "old-k8s-version-942644", held for 11.938101189s
	I1027 19:57:08.304362  445124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-942644
	I1027 19:57:08.321547  445124 ssh_runner.go:195] Run: cat /version.json
	I1027 19:57:08.321601  445124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-942644
	I1027 19:57:08.321831  445124 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 19:57:08.321899  445124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-942644
	I1027 19:57:08.344537  445124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/old-k8s-version-942644/id_rsa Username:docker}
	I1027 19:57:08.347149  445124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/old-k8s-version-942644/id_rsa Username:docker}
	I1027 19:57:08.549097  445124 ssh_runner.go:195] Run: systemctl --version
	I1027 19:57:08.555412  445124 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 19:57:08.590229  445124 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 19:57:08.594711  445124 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 19:57:08.594852  445124 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 19:57:08.628936  445124 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1027 19:57:08.628957  445124 start.go:495] detecting cgroup driver to use...
	I1027 19:57:08.628988  445124 detect.go:187] detected "cgroupfs" cgroup driver on host os
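
A plausible reading of the detection above (an assumed heuristic, not necessarily what minikube's detect.go does): a unified cgroup v2 hierarchy exposes /sys/fs/cgroup/cgroup.controllers, and its absence on this Ubuntu 20.04 host yields "cgroupfs":

package main

import (
	"fmt"
	"os"
)

// detectCgroupDriver picks a driver based on the cgroup hierarchy:
// /sys/fs/cgroup/cgroup.controllers only exists on a unified (v2) hierarchy,
// where systemd is the natural driver; otherwise fall back to cgroupfs.
func detectCgroupDriver() string {
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		return "systemd"
	}
	return "cgroupfs"
}

func main() {
	fmt.Printf("detected %q cgroup driver on host os\n", detectCgroupDriver())
}
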
	I1027 19:57:08.629040  445124 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 19:57:08.647292  445124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 19:57:08.659847  445124 docker.go:218] disabling cri-docker service (if available) ...
	I1027 19:57:08.659985  445124 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 19:57:08.677278  445124 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 19:57:08.695917  445124 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 19:57:08.822395  445124 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 19:57:08.947362  445124 docker.go:234] disabling docker service ...
	I1027 19:57:08.947475  445124 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 19:57:08.969423  445124 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 19:57:08.985368  445124 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 19:57:09.108288  445124 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 19:57:09.251297  445124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 19:57:09.266620  445124 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 19:57:09.281815  445124 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1027 19:57:09.281901  445124 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:57:09.292116  445124 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 19:57:09.292195  445124 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:57:09.301600  445124 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:57:09.310500  445124 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:57:09.319931  445124 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 19:57:09.328475  445124 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:57:09.337173  445124 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:57:09.350677  445124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:57:09.359542  445124 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 19:57:09.367350  445124 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 19:57:09.374927  445124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:57:09.504636  445124 ssh_runner.go:195] Run: sudo systemctl restart crio
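
The sed runs above are idempotent line rewrites of /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager) followed by a daemon restart. The same two key rewrites, sketched as a hypothetical Go helper (rewrite) rather than the actual crio.go logic:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

// rewrite replaces whole lines matching each pattern with its replacement,
// leaving all other lines of the config file untouched.
func rewrite(path string, subs map[*regexp.Regexp]string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var out []string
	sc := bufio.NewScanner(strings.NewReader(string(data)))
	for sc.Scan() {
		line := sc.Text()
		for re, repl := range subs {
			if re.MatchString(line) {
				line = repl
			}
		}
		out = append(out, line)
	}
	return os.WriteFile(path, []byte(strings.Join(out, "\n")+"\n"), 0o644)
}

func main() {
	subs := map[*regexp.Regexp]string{
		regexp.MustCompile(`^.*pause_image = .*$`):    `pause_image = "registry.k8s.io/pause:3.9"`,
		regexp.MustCompile(`^.*cgroup_manager = .*$`): `cgroup_manager = "cgroupfs"`,
	}
	if err := rewrite("/etc/crio/crio.conf.d/02-crio.conf", subs); err != nil {
		fmt.Println(err)
	}
}
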
	I1027 19:57:09.640365  445124 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 19:57:09.640445  445124 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 19:57:09.644637  445124 start.go:563] Will wait 60s for crictl version
	I1027 19:57:09.644719  445124 ssh_runner.go:195] Run: which crictl
	I1027 19:57:09.648590  445124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 19:57:09.673778  445124 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 19:57:09.673868  445124 ssh_runner.go:195] Run: crio --version
	I1027 19:57:09.704476  445124 ssh_runner.go:195] Run: crio --version
	I1027 19:57:09.738805  445124 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1027 19:57:09.741815  445124 cli_runner.go:164] Run: docker network inspect old-k8s-version-942644 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 19:57:09.758090  445124 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1027 19:57:09.762156  445124 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 19:57:09.778700  445124 kubeadm.go:883] updating cluster {Name:old-k8s-version-942644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-942644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 19:57:09.778820  445124 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1027 19:57:09.778889  445124 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:57:09.810468  445124 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 19:57:09.810488  445124 crio.go:433] Images already preloaded, skipping extraction
	I1027 19:57:09.810555  445124 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:57:09.840350  445124 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 19:57:09.840372  445124 cache_images.go:85] Images are preloaded, skipping loading
	I1027 19:57:09.840381  445124 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1027 19:57:09.840507  445124 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-942644 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-942644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 19:57:09.840587  445124 ssh_runner.go:195] Run: crio config
	I1027 19:57:09.895186  445124 cni.go:84] Creating CNI manager for ""
	I1027 19:57:09.895209  445124 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:57:09.895227  445124 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 19:57:09.895251  445124 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-942644 NodeName:old-k8s-version-942644 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 19:57:09.895392  445124 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-942644"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 19:57:09.895467  445124 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1027 19:57:09.903079  445124 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 19:57:09.903189  445124 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 19:57:09.910417  445124 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1027 19:57:09.922436  445124 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 19:57:09.935899  445124 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1027 19:57:09.948963  445124 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1027 19:57:09.952793  445124 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 19:57:09.963476  445124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:57:10.085061  445124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:57:10.102964  445124 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644 for IP: 192.168.76.2
	I1027 19:57:10.103106  445124 certs.go:195] generating shared ca certs ...
	I1027 19:57:10.103140  445124 certs.go:227] acquiring lock for ca certs: {Name:mk172548fc54811b51040cc0201d01eb4b3dd19b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:57:10.103334  445124 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key
	I1027 19:57:10.103420  445124 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key
	I1027 19:57:10.103447  445124 certs.go:257] generating profile certs ...
	I1027 19:57:10.103530  445124 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/client.key
	I1027 19:57:10.103568  445124 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/client.crt with IP's: []
	I1027 19:57:10.651745  445124 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/client.crt ...
	I1027 19:57:10.651781  445124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/client.crt: {Name:mkf497e88e5d272897b7d7adf0a1a9b9e8047dc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:57:10.651979  445124 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/client.key ...
	I1027 19:57:10.651994  445124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/client.key: {Name:mk51c907b983c5a2ca0a370cf5ce30e7c663746f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:57:10.652083  445124 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/apiserver.key.554bf563
	I1027 19:57:10.652106  445124 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/apiserver.crt.554bf563 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1027 19:57:11.269513  445124 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/apiserver.crt.554bf563 ...
	I1027 19:57:11.269544  445124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/apiserver.crt.554bf563: {Name:mkc2dcfa3f9a4e9b90bf791b7efa7d59ef7ceb8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:57:11.269732  445124 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/apiserver.key.554bf563 ...
	I1027 19:57:11.269746  445124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/apiserver.key.554bf563: {Name:mk077bb9d1ac1661cb32f07449cec9b15194ec2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:57:11.269831  445124 certs.go:382] copying /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/apiserver.crt.554bf563 -> /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/apiserver.crt
	I1027 19:57:11.269915  445124 certs.go:386] copying /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/apiserver.key.554bf563 -> /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/apiserver.key
	I1027 19:57:11.269972  445124 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/proxy-client.key
	I1027 19:57:11.269990  445124 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/proxy-client.crt with IP's: []
	I1027 19:57:11.868391  445124 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/proxy-client.crt ...
	I1027 19:57:11.868421  445124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/proxy-client.crt: {Name:mk5a131d64f7e24fb64468201eaf94d71d556573 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:57:11.868607  445124 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/proxy-client.key ...
	I1027 19:57:11.868621  445124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/proxy-client.key: {Name:mkb30f9fc1f2aaac3d460700d9c091f9dc8a9e40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:57:11.868810  445124 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880.pem (1338 bytes)
	W1027 19:57:11.868854  445124 certs.go:480] ignoring /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880_empty.pem: empty file (0 bytes)
	I1027 19:57:11.868868  445124 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 19:57:11.868892  445124 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem (1078 bytes)
	I1027 19:57:11.868919  445124 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem (1123 bytes)
	I1027 19:57:11.868952  445124 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem (1675 bytes)
	I1027 19:57:11.868999  445124 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 19:57:11.869601  445124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 19:57:11.889093  445124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 19:57:11.907399  445124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 19:57:11.925830  445124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 19:57:11.944136  445124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1027 19:57:11.962671  445124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 19:57:11.980499  445124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 19:57:11.997680  445124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 19:57:12.020901  445124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /usr/share/ca-certificates/2678802.pem (1708 bytes)
	I1027 19:57:12.041389  445124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 19:57:12.059872  445124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880.pem --> /usr/share/ca-certificates/267880.pem (1338 bytes)
	I1027 19:57:12.078565  445124 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 19:57:12.091665  445124 ssh_runner.go:195] Run: openssl version
	I1027 19:57:12.097704  445124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/267880.pem && ln -fs /usr/share/ca-certificates/267880.pem /etc/ssl/certs/267880.pem"
	I1027 19:57:12.105942  445124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/267880.pem
	I1027 19:57:12.109892  445124 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:04 /usr/share/ca-certificates/267880.pem
	I1027 19:57:12.110011  445124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/267880.pem
	I1027 19:57:12.151495  445124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/267880.pem /etc/ssl/certs/51391683.0"
	I1027 19:57:12.159505  445124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2678802.pem && ln -fs /usr/share/ca-certificates/2678802.pem /etc/ssl/certs/2678802.pem"
	I1027 19:57:12.167473  445124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2678802.pem
	I1027 19:57:12.171224  445124 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:04 /usr/share/ca-certificates/2678802.pem
	I1027 19:57:12.171302  445124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2678802.pem
	I1027 19:57:12.212483  445124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2678802.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 19:57:12.220620  445124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 19:57:12.228315  445124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:57:12.232167  445124 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:57:12.232236  445124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:57:12.272991  445124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
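The `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's hashed-directory lookup: each CA in /etc/ssl/certs gets a symlink named after its subject-name hash (b5213941.0 for minikubeCA.pem in this run), which is how verifiers locate the issuer. Condensed from the commands in the log:

    # The <hash>.0 symlink convention used above (OpenSSL hashed cert directory).
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # -> b5213941.0 here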
	I1027 19:57:12.281354  445124 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 19:57:12.284701  445124 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 19:57:12.284770  445124 kubeadm.go:400] StartCluster: {Name:old-k8s-version-942644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-942644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:57:12.284855  445124 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:57:12.284949  445124 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:57:12.318657  445124 cri.go:89] found id: ""
	I1027 19:57:12.318741  445124 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 19:57:12.330171  445124 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 19:57:12.338631  445124 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1027 19:57:12.338692  445124 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 19:57:12.349223  445124 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 19:57:12.349242  445124 kubeadm.go:157] found existing configuration files:
	
	I1027 19:57:12.349289  445124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 19:57:12.357825  445124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 19:57:12.357897  445124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 19:57:12.365946  445124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 19:57:12.375083  445124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 19:57:12.375146  445124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 19:57:12.382700  445124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 19:57:12.391163  445124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 19:57:12.391224  445124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 19:57:12.401149  445124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 19:57:12.408719  445124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 19:57:12.408810  445124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
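The four grep/rm pairs above are a stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before kubeadm init. Condensed into one loop:

    # Condensed sketch of the check above: drop any kubeconfig that does not
    # point at the expected control-plane endpoint.
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done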
	I1027 19:57:12.416575  445124 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 19:57:12.469116  445124 kubeadm.go:318] [init] Using Kubernetes version: v1.28.0
	I1027 19:57:12.469234  445124 kubeadm.go:318] [preflight] Running pre-flight checks
	I1027 19:57:12.520702  445124 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1027 19:57:12.520801  445124 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1027 19:57:12.520857  445124 kubeadm.go:318] OS: Linux
	I1027 19:57:12.520920  445124 kubeadm.go:318] CGROUPS_CPU: enabled
	I1027 19:57:12.520989  445124 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1027 19:57:12.521082  445124 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1027 19:57:12.521159  445124 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1027 19:57:12.521248  445124 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1027 19:57:12.521331  445124 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1027 19:57:12.521425  445124 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1027 19:57:12.521536  445124 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1027 19:57:12.521630  445124 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1027 19:57:12.600854  445124 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 19:57:12.601006  445124 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 19:57:12.601112  445124 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 19:57:12.747908  445124 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 19:57:12.752973  445124 out.go:252]   - Generating certificates and keys ...
	I1027 19:57:12.753084  445124 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1027 19:57:12.753180  445124 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1027 19:57:13.344078  445124 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 19:57:13.546961  445124 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1027 19:57:14.459203  445124 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1027 19:57:14.613409  445124 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1027 19:57:14.956207  445124 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1027 19:57:14.956368  445124 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-942644] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1027 19:57:15.179337  445124 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1027 19:57:15.179807  445124 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-942644] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1027 19:57:15.689589  445124 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 19:57:16.239464  445124 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 19:57:16.804188  445124 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1027 19:57:16.804446  445124 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 19:57:17.413690  445124 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 19:57:18.254626  445124 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 19:57:18.544345  445124 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 19:57:18.916753  445124 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 19:57:18.917706  445124 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 19:57:18.920515  445124 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 19:57:18.924089  445124 out.go:252]   - Booting up control plane ...
	I1027 19:57:18.924196  445124 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 19:57:18.924278  445124 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 19:57:18.924360  445124 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 19:57:18.940459  445124 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 19:57:18.942233  445124 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 19:57:18.942287  445124 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1027 19:57:19.079672  445124 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1027 19:57:26.581221  445124 kubeadm.go:318] [apiclient] All control plane components are healthy after 7.503974 seconds
	I1027 19:57:26.581429  445124 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 19:57:26.598230  445124 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 19:57:27.128569  445124 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 19:57:27.128780  445124 kubeadm.go:318] [mark-control-plane] Marking the node old-k8s-version-942644 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 19:57:27.642592  445124 kubeadm.go:318] [bootstrap-token] Using token: slbjfv.pv04ageom75ufrbw
	I1027 19:57:27.645555  445124 out.go:252]   - Configuring RBAC rules ...
	I1027 19:57:27.645685  445124 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 19:57:27.650588  445124 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 19:57:27.660659  445124 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 19:57:27.665112  445124 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 19:57:27.669307  445124 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 19:57:27.676880  445124 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 19:57:27.690400  445124 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 19:57:28.014867  445124 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1027 19:57:28.085372  445124 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1027 19:57:28.085397  445124 kubeadm.go:318] 
	I1027 19:57:28.085458  445124 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1027 19:57:28.085470  445124 kubeadm.go:318] 
	I1027 19:57:28.085548  445124 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1027 19:57:28.085557  445124 kubeadm.go:318] 
	I1027 19:57:28.085582  445124 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1027 19:57:28.085645  445124 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 19:57:28.085699  445124 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 19:57:28.085707  445124 kubeadm.go:318] 
	I1027 19:57:28.085761  445124 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1027 19:57:28.085788  445124 kubeadm.go:318] 
	I1027 19:57:28.085841  445124 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 19:57:28.085851  445124 kubeadm.go:318] 
	I1027 19:57:28.085903  445124 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1027 19:57:28.085981  445124 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 19:57:28.086051  445124 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 19:57:28.086059  445124 kubeadm.go:318] 
	I1027 19:57:28.086154  445124 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 19:57:28.086234  445124 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1027 19:57:28.086243  445124 kubeadm.go:318] 
	I1027 19:57:28.086326  445124 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token slbjfv.pv04ageom75ufrbw \
	I1027 19:57:28.086436  445124 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9473f077605f6866eb08c8a712af7c56a0deb8dc2a907ec8cbb4b8ae396e8f1f \
	I1027 19:57:28.086460  445124 kubeadm.go:318] 	--control-plane 
	I1027 19:57:28.086468  445124 kubeadm.go:318] 
	I1027 19:57:28.086552  445124 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1027 19:57:28.086560  445124 kubeadm.go:318] 
	I1027 19:57:28.086652  445124 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token slbjfv.pv04ageom75ufrbw \
	I1027 19:57:28.086757  445124 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9473f077605f6866eb08c8a712af7c56a0deb8dc2a907ec8cbb4b8ae396e8f1f 
	I1027 19:57:28.092978  445124 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1027 19:57:28.093106  445124 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
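Both preflight warnings are non-fatal here: the missing `configs` kernel module only prevents kubeadm from double-checking kernel options on this AWS kernel, and the kubelet-service warning names its own fix. As the message itself suggests (harmless if the unit is already enabled):

    sudo systemctl enable kubelet.service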
	I1027 19:57:28.093127  445124 cni.go:84] Creating CNI manager for ""
	I1027 19:57:28.093139  445124 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:57:28.096594  445124 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 19:57:28.099663  445124 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 19:57:28.109175  445124 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1027 19:57:28.109199  445124 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 19:57:28.144342  445124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 19:57:29.147248  445124 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.002866477s)
	I1027 19:57:29.147301  445124 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 19:57:29.147399  445124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:57:29.147444  445124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-942644 minikube.k8s.io/updated_at=2025_10_27T19_57_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f minikube.k8s.io/name=old-k8s-version-942644 minikube.k8s.io/primary=true
	I1027 19:57:29.330114  445124 ops.go:34] apiserver oom_adj: -16
	I1027 19:57:29.330233  445124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:57:29.831244  445124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:57:30.331096  445124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:57:30.831244  445124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:57:31.331047  445124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:57:31.831276  445124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:57:32.330948  445124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:57:32.830656  445124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:57:33.331061  445124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:57:33.831118  445124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:57:34.330397  445124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:57:34.830814  445124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:57:35.330468  445124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:57:35.831270  445124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:57:36.330352  445124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:57:36.830595  445124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:57:37.330901  445124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:57:37.830708  445124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:57:38.330620  445124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:57:38.830355  445124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:57:39.330381  445124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:57:39.831132  445124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:57:40.330430  445124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:57:40.831069  445124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:57:41.330362  445124 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:57:41.428549  445124 kubeadm.go:1113] duration metric: took 12.281202644s to wait for elevateKubeSystemPrivileges
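The repeated `kubectl get sa default` calls at roughly 500ms intervals are a poll for the default ServiceAccount, which the controller-manager creates asynchronously after init. A standalone equivalent of that loop (a sketch, using the same pinned kubectl and kubeconfig as the log):

    # Poll until the default ServiceAccount exists, as the ~500ms loop above does.
    until sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done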
	I1027 19:57:41.428581  445124 kubeadm.go:402] duration metric: took 29.143832081s to StartCluster
	I1027 19:57:41.428600  445124 settings.go:142] acquiring lock: {Name:mk49b9e2e9b7901c6b51e89b8b1e8ee7f9e88107 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:57:41.428667  445124 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 19:57:41.429684  445124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/kubeconfig: {Name:mk0a03a594f8d9f9199b1c7cff7c550cec8414c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:57:41.429914  445124 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:57:41.430011  445124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 19:57:41.430259  445124 config.go:182] Loaded profile config "old-k8s-version-942644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1027 19:57:41.430303  445124 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 19:57:41.430370  445124 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-942644"
	I1027 19:57:41.430387  445124 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-942644"
	I1027 19:57:41.430408  445124 host.go:66] Checking if "old-k8s-version-942644" exists ...
	I1027 19:57:41.430910  445124 cli_runner.go:164] Run: docker container inspect old-k8s-version-942644 --format={{.State.Status}}
	I1027 19:57:41.431179  445124 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-942644"
	I1027 19:57:41.431204  445124 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-942644"
	I1027 19:57:41.431458  445124 cli_runner.go:164] Run: docker container inspect old-k8s-version-942644 --format={{.State.Status}}
	I1027 19:57:41.433869  445124 out.go:179] * Verifying Kubernetes components...
	I1027 19:57:41.441782  445124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:57:41.472724  445124 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:57:41.474081  445124 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-942644"
	I1027 19:57:41.474118  445124 host.go:66] Checking if "old-k8s-version-942644" exists ...
	I1027 19:57:41.474537  445124 cli_runner.go:164] Run: docker container inspect old-k8s-version-942644 --format={{.State.Status}}
	I1027 19:57:41.477515  445124 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:57:41.477535  445124 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 19:57:41.477604  445124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-942644
	I1027 19:57:41.513757  445124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/old-k8s-version-942644/id_rsa Username:docker}
	I1027 19:57:41.521984  445124 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 19:57:41.522004  445124 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 19:57:41.522067  445124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-942644
	I1027 19:57:41.552648  445124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/old-k8s-version-942644/id_rsa Username:docker}
	I1027 19:57:41.779572  445124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 19:57:41.779692  445124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:57:41.800590  445124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:57:41.866947  445124 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 19:57:42.642106  445124 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-942644" to be "Ready" ...
	I1027 19:57:42.642225  445124 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
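The sed pipeline at 19:57:41 rewrites the coredns ConfigMap so cluster DNS resolves host.minikube.internal to the gateway. Judging from that command, the patched Corefile fragment should look roughly like this (reconstructed from the sed expressions, not copied from the cluster):

    errors
    log
    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf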
	I1027 19:57:43.019992  445124 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.219352816s)
	I1027 19:57:43.020453  445124 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.1534582s)
	I1027 19:57:43.052553  445124 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1027 19:57:43.055428  445124 addons.go:514] duration metric: took 1.625095027s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 19:57:43.146857  445124 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-942644" context rescaled to 1 replica
	W1027 19:57:44.645434  445124 node_ready.go:57] node "old-k8s-version-942644" has "Ready":"False" status (will retry)
	W1027 19:57:47.146018  445124 node_ready.go:57] node "old-k8s-version-942644" has "Ready":"False" status (will retry)
	W1027 19:57:49.645025  445124 node_ready.go:57] node "old-k8s-version-942644" has "Ready":"False" status (will retry)
	W1027 19:57:51.645912  445124 node_ready.go:57] node "old-k8s-version-942644" has "Ready":"False" status (will retry)
	W1027 19:57:54.146172  445124 node_ready.go:57] node "old-k8s-version-942644" has "Ready":"False" status (will retry)
	I1027 19:57:55.147413  445124 node_ready.go:49] node "old-k8s-version-942644" is "Ready"
	I1027 19:57:55.147438  445124 node_ready.go:38] duration metric: took 12.50529888s for node "old-k8s-version-942644" to be "Ready" ...
	I1027 19:57:55.147451  445124 api_server.go:52] waiting for apiserver process to appear ...
	I1027 19:57:55.147513  445124 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:57:55.167547  445124 api_server.go:72] duration metric: took 13.737597397s to wait for apiserver process to appear ...
	I1027 19:57:55.167621  445124 api_server.go:88] waiting for apiserver healthz status ...
	I1027 19:57:55.167655  445124 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 19:57:55.179007  445124 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1027 19:57:55.185377  445124 api_server.go:141] control plane version: v1.28.0
	I1027 19:57:55.185411  445124 api_server.go:131] duration metric: took 17.76931ms to wait for apiserver health ...
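The healthz check is a plain HTTPS GET against the apiserver. A manual equivalent (sketch; -k skips TLS verification, or pass --cacert with the profile's ca.crt instead):

    curl -k https://192.168.76.2:8443/healthz    # expected body: ok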
	I1027 19:57:55.185421  445124 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 19:57:55.195690  445124 system_pods.go:59] 8 kube-system pods found
	I1027 19:57:55.195726  445124 system_pods.go:61] "coredns-5dd5756b68-fzdkv" [9bbe01db-3ae1-42aa-967e-299956b53a62] Pending
	I1027 19:57:55.195733  445124 system_pods.go:61] "etcd-old-k8s-version-942644" [36230dd8-98da-4474-9de3-442a07360ad0] Running
	I1027 19:57:55.195759  445124 system_pods.go:61] "kindnet-845vr" [368f6d7a-80df-484b-be42-7c5234ee7284] Running
	I1027 19:57:55.195766  445124 system_pods.go:61] "kube-apiserver-old-k8s-version-942644" [96e5d686-f0a5-4f0f-8add-28827ae8a8fc] Running
	I1027 19:57:55.195770  445124 system_pods.go:61] "kube-controller-manager-old-k8s-version-942644" [ee8d8d7f-1033-4249-843d-1943132c447f] Running
	I1027 19:57:55.195774  445124 system_pods.go:61] "kube-proxy-nbdp5" [68d29ee2-8cd1-4c17-895e-2a052615395b] Running
	I1027 19:57:55.195778  445124 system_pods.go:61] "kube-scheduler-old-k8s-version-942644" [54fb1b99-51ea-46ac-b6c1-3d78566e375e] Running
	I1027 19:57:55.195783  445124 system_pods.go:61] "storage-provisioner" [b7501b67-1642-472f-aec7-bf5f4cf46c0c] Pending
	I1027 19:57:55.195788  445124 system_pods.go:74] duration metric: took 10.362045ms to wait for pod list to return data ...
	I1027 19:57:55.195796  445124 default_sa.go:34] waiting for default service account to be created ...
	I1027 19:57:55.207074  445124 default_sa.go:45] found service account: "default"
	I1027 19:57:55.207104  445124 default_sa.go:55] duration metric: took 11.300758ms for default service account to be created ...
	I1027 19:57:55.207115  445124 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 19:57:55.212573  445124 system_pods.go:86] 8 kube-system pods found
	I1027 19:57:55.212611  445124 system_pods.go:89] "coredns-5dd5756b68-fzdkv" [9bbe01db-3ae1-42aa-967e-299956b53a62] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 19:57:55.212618  445124 system_pods.go:89] "etcd-old-k8s-version-942644" [36230dd8-98da-4474-9de3-442a07360ad0] Running
	I1027 19:57:55.212626  445124 system_pods.go:89] "kindnet-845vr" [368f6d7a-80df-484b-be42-7c5234ee7284] Running
	I1027 19:57:55.212632  445124 system_pods.go:89] "kube-apiserver-old-k8s-version-942644" [96e5d686-f0a5-4f0f-8add-28827ae8a8fc] Running
	I1027 19:57:55.212637  445124 system_pods.go:89] "kube-controller-manager-old-k8s-version-942644" [ee8d8d7f-1033-4249-843d-1943132c447f] Running
	I1027 19:57:55.212641  445124 system_pods.go:89] "kube-proxy-nbdp5" [68d29ee2-8cd1-4c17-895e-2a052615395b] Running
	I1027 19:57:55.212646  445124 system_pods.go:89] "kube-scheduler-old-k8s-version-942644" [54fb1b99-51ea-46ac-b6c1-3d78566e375e] Running
	I1027 19:57:55.212650  445124 system_pods.go:89] "storage-provisioner" [b7501b67-1642-472f-aec7-bf5f4cf46c0c] Pending
	I1027 19:57:55.212676  445124 retry.go:31] will retry after 303.634491ms: missing components: kube-dns
	I1027 19:57:55.521637  445124 system_pods.go:86] 8 kube-system pods found
	I1027 19:57:55.521666  445124 system_pods.go:89] "coredns-5dd5756b68-fzdkv" [9bbe01db-3ae1-42aa-967e-299956b53a62] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 19:57:55.521673  445124 system_pods.go:89] "etcd-old-k8s-version-942644" [36230dd8-98da-4474-9de3-442a07360ad0] Running
	I1027 19:57:55.521680  445124 system_pods.go:89] "kindnet-845vr" [368f6d7a-80df-484b-be42-7c5234ee7284] Running
	I1027 19:57:55.521686  445124 system_pods.go:89] "kube-apiserver-old-k8s-version-942644" [96e5d686-f0a5-4f0f-8add-28827ae8a8fc] Running
	I1027 19:57:55.521691  445124 system_pods.go:89] "kube-controller-manager-old-k8s-version-942644" [ee8d8d7f-1033-4249-843d-1943132c447f] Running
	I1027 19:57:55.521694  445124 system_pods.go:89] "kube-proxy-nbdp5" [68d29ee2-8cd1-4c17-895e-2a052615395b] Running
	I1027 19:57:55.521698  445124 system_pods.go:89] "kube-scheduler-old-k8s-version-942644" [54fb1b99-51ea-46ac-b6c1-3d78566e375e] Running
	I1027 19:57:55.521704  445124 system_pods.go:89] "storage-provisioner" [b7501b67-1642-472f-aec7-bf5f4cf46c0c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 19:57:55.521720  445124 retry.go:31] will retry after 251.589575ms: missing components: kube-dns
	I1027 19:57:55.778183  445124 system_pods.go:86] 8 kube-system pods found
	I1027 19:57:55.778217  445124 system_pods.go:89] "coredns-5dd5756b68-fzdkv" [9bbe01db-3ae1-42aa-967e-299956b53a62] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 19:57:55.778225  445124 system_pods.go:89] "etcd-old-k8s-version-942644" [36230dd8-98da-4474-9de3-442a07360ad0] Running
	I1027 19:57:55.778231  445124 system_pods.go:89] "kindnet-845vr" [368f6d7a-80df-484b-be42-7c5234ee7284] Running
	I1027 19:57:55.778236  445124 system_pods.go:89] "kube-apiserver-old-k8s-version-942644" [96e5d686-f0a5-4f0f-8add-28827ae8a8fc] Running
	I1027 19:57:55.778241  445124 system_pods.go:89] "kube-controller-manager-old-k8s-version-942644" [ee8d8d7f-1033-4249-843d-1943132c447f] Running
	I1027 19:57:55.778245  445124 system_pods.go:89] "kube-proxy-nbdp5" [68d29ee2-8cd1-4c17-895e-2a052615395b] Running
	I1027 19:57:55.778249  445124 system_pods.go:89] "kube-scheduler-old-k8s-version-942644" [54fb1b99-51ea-46ac-b6c1-3d78566e375e] Running
	I1027 19:57:55.778255  445124 system_pods.go:89] "storage-provisioner" [b7501b67-1642-472f-aec7-bf5f4cf46c0c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 19:57:55.778277  445124 retry.go:31] will retry after 347.031847ms: missing components: kube-dns
	I1027 19:57:56.130493  445124 system_pods.go:86] 8 kube-system pods found
	I1027 19:57:56.130528  445124 system_pods.go:89] "coredns-5dd5756b68-fzdkv" [9bbe01db-3ae1-42aa-967e-299956b53a62] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 19:57:56.130536  445124 system_pods.go:89] "etcd-old-k8s-version-942644" [36230dd8-98da-4474-9de3-442a07360ad0] Running
	I1027 19:57:56.130542  445124 system_pods.go:89] "kindnet-845vr" [368f6d7a-80df-484b-be42-7c5234ee7284] Running
	I1027 19:57:56.130548  445124 system_pods.go:89] "kube-apiserver-old-k8s-version-942644" [96e5d686-f0a5-4f0f-8add-28827ae8a8fc] Running
	I1027 19:57:56.130572  445124 system_pods.go:89] "kube-controller-manager-old-k8s-version-942644" [ee8d8d7f-1033-4249-843d-1943132c447f] Running
	I1027 19:57:56.130583  445124 system_pods.go:89] "kube-proxy-nbdp5" [68d29ee2-8cd1-4c17-895e-2a052615395b] Running
	I1027 19:57:56.130588  445124 system_pods.go:89] "kube-scheduler-old-k8s-version-942644" [54fb1b99-51ea-46ac-b6c1-3d78566e375e] Running
	I1027 19:57:56.130602  445124 system_pods.go:89] "storage-provisioner" [b7501b67-1642-472f-aec7-bf5f4cf46c0c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 19:57:56.130617  445124 retry.go:31] will retry after 579.437574ms: missing components: kube-dns
	I1027 19:57:56.715024  445124 system_pods.go:86] 8 kube-system pods found
	I1027 19:57:56.715056  445124 system_pods.go:89] "coredns-5dd5756b68-fzdkv" [9bbe01db-3ae1-42aa-967e-299956b53a62] Running
	I1027 19:57:56.715063  445124 system_pods.go:89] "etcd-old-k8s-version-942644" [36230dd8-98da-4474-9de3-442a07360ad0] Running
	I1027 19:57:56.715069  445124 system_pods.go:89] "kindnet-845vr" [368f6d7a-80df-484b-be42-7c5234ee7284] Running
	I1027 19:57:56.715075  445124 system_pods.go:89] "kube-apiserver-old-k8s-version-942644" [96e5d686-f0a5-4f0f-8add-28827ae8a8fc] Running
	I1027 19:57:56.715080  445124 system_pods.go:89] "kube-controller-manager-old-k8s-version-942644" [ee8d8d7f-1033-4249-843d-1943132c447f] Running
	I1027 19:57:56.715084  445124 system_pods.go:89] "kube-proxy-nbdp5" [68d29ee2-8cd1-4c17-895e-2a052615395b] Running
	I1027 19:57:56.715097  445124 system_pods.go:89] "kube-scheduler-old-k8s-version-942644" [54fb1b99-51ea-46ac-b6c1-3d78566e375e] Running
	I1027 19:57:56.715102  445124 system_pods.go:89] "storage-provisioner" [b7501b67-1642-472f-aec7-bf5f4cf46c0c] Running
	I1027 19:57:56.715111  445124 system_pods.go:126] duration metric: took 1.507988078s to wait for k8s-apps to be running ...
	I1027 19:57:56.715121  445124 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 19:57:56.715181  445124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:57:56.729600  445124 system_svc.go:56] duration metric: took 14.465337ms WaitForService to wait for kubelet
	I1027 19:57:56.729637  445124 kubeadm.go:586] duration metric: took 15.299692045s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 19:57:56.729657  445124 node_conditions.go:102] verifying NodePressure condition ...
	I1027 19:57:56.732461  445124 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 19:57:56.732493  445124 node_conditions.go:123] node cpu capacity is 2
	I1027 19:57:56.732508  445124 node_conditions.go:105] duration metric: took 2.845203ms to run NodePressure ...
	I1027 19:57:56.732520  445124 start.go:241] waiting for startup goroutines ...
	I1027 19:57:56.732527  445124 start.go:246] waiting for cluster config update ...
	I1027 19:57:56.732553  445124 start.go:255] writing updated cluster config ...
	I1027 19:57:56.732867  445124 ssh_runner.go:195] Run: rm -f paused
	I1027 19:57:56.736874  445124 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 19:57:56.741116  445124 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-fzdkv" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:57:56.745957  445124 pod_ready.go:94] pod "coredns-5dd5756b68-fzdkv" is "Ready"
	I1027 19:57:56.745985  445124 pod_ready.go:86] duration metric: took 4.844572ms for pod "coredns-5dd5756b68-fzdkv" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:57:56.749062  445124 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-942644" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:57:56.753536  445124 pod_ready.go:94] pod "etcd-old-k8s-version-942644" is "Ready"
	I1027 19:57:56.753561  445124 pod_ready.go:86] duration metric: took 4.474523ms for pod "etcd-old-k8s-version-942644" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:57:56.756589  445124 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-942644" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:57:56.761194  445124 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-942644" is "Ready"
	I1027 19:57:56.761219  445124 pod_ready.go:86] duration metric: took 4.602545ms for pod "kube-apiserver-old-k8s-version-942644" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:57:56.764042  445124 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-942644" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:57:57.141747  445124 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-942644" is "Ready"
	I1027 19:57:57.141777  445124 pod_ready.go:86] duration metric: took 377.708611ms for pod "kube-controller-manager-old-k8s-version-942644" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:57:57.341704  445124 pod_ready.go:83] waiting for pod "kube-proxy-nbdp5" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:57:57.741027  445124 pod_ready.go:94] pod "kube-proxy-nbdp5" is "Ready"
	I1027 19:57:57.741053  445124 pod_ready.go:86] duration metric: took 399.323949ms for pod "kube-proxy-nbdp5" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:57:57.942039  445124 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-942644" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:57:58.341582  445124 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-942644" is "Ready"
	I1027 19:57:58.341665  445124 pod_ready.go:86] duration metric: took 399.600742ms for pod "kube-scheduler-old-k8s-version-942644" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:57:58.341694  445124 pod_ready.go:40] duration metric: took 1.604790451s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
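The per-pod waits above check each control-plane label in turn; `kubectl wait` can express the same condition directly. A sketch for the kube-dns case (the selector comes from the label list in the log):

    kubectl -n kube-system wait pod -l k8s-app=kube-dns \
      --for=condition=Ready --timeout=4m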
	I1027 19:57:58.406597  445124 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1027 19:57:58.409708  445124 out.go:203] 
	W1027 19:57:58.412538  445124 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1027 19:57:58.417805  445124 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1027 19:57:58.420712  445124 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-942644" cluster and "default" namespace by default
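The skew warning reflects kubectl's support policy (one minor version either side of the server; 1.33 against 1.28 is five minors out). The hint above pins a version-matched kubectl through minikube:

    minikube -p old-k8s-version-942644 kubectl -- get pods -A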
	
	
	==> CRI-O <==
	Oct 27 19:57:55 old-k8s-version-942644 crio[839]: time="2025-10-27T19:57:55.570385417Z" level=info msg="Created container 4ee64c69d7d0f0a3c9c152df1d443e07649aa1672fad57182e778cd6014f6492: kube-system/coredns-5dd5756b68-fzdkv/coredns" id=a39ec3da-bd0f-449d-8195-6ab3851a1fee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:57:55 old-k8s-version-942644 crio[839]: time="2025-10-27T19:57:55.571591159Z" level=info msg="Starting container: 4ee64c69d7d0f0a3c9c152df1d443e07649aa1672fad57182e778cd6014f6492" id=9b1e7b30-a87a-4ae4-94a5-e8b712bd2bba name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:57:55 old-k8s-version-942644 crio[839]: time="2025-10-27T19:57:55.575204243Z" level=info msg="Started container" PID=1942 containerID=4ee64c69d7d0f0a3c9c152df1d443e07649aa1672fad57182e778cd6014f6492 description=kube-system/coredns-5dd5756b68-fzdkv/coredns id=9b1e7b30-a87a-4ae4-94a5-e8b712bd2bba name=/runtime.v1.RuntimeService/StartContainer sandboxID=a5ca4956cc80a202ce448e52e7603caa726e60d474caf0558cab5a9eeb5391f1
	Oct 27 19:57:58 old-k8s-version-942644 crio[839]: time="2025-10-27T19:57:58.95619372Z" level=info msg="Running pod sandbox: default/busybox/POD" id=8caac8bb-bf4e-4e18-a2ff-e02de16ed011 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 19:57:58 old-k8s-version-942644 crio[839]: time="2025-10-27T19:57:58.956267555Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:57:58 old-k8s-version-942644 crio[839]: time="2025-10-27T19:57:58.962506066Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ec34e98dce97048a1737d19a0a5ee5ebd42acb718937aee4438679810b626db4 UID:54b4e94f-69ec-4136-8574-9416e44e9e48 NetNS:/var/run/netns/9df9f3f0-a6f8-409c-bc0c-243e9672f2c2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079e48}] Aliases:map[]}"
	Oct 27 19:57:58 old-k8s-version-942644 crio[839]: time="2025-10-27T19:57:58.962543439Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 27 19:57:58 old-k8s-version-942644 crio[839]: time="2025-10-27T19:57:58.974320845Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:ec34e98dce97048a1737d19a0a5ee5ebd42acb718937aee4438679810b626db4 UID:54b4e94f-69ec-4136-8574-9416e44e9e48 NetNS:/var/run/netns/9df9f3f0-a6f8-409c-bc0c-243e9672f2c2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079e48}] Aliases:map[]}"
	Oct 27 19:57:58 old-k8s-version-942644 crio[839]: time="2025-10-27T19:57:58.974510493Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 27 19:57:58 old-k8s-version-942644 crio[839]: time="2025-10-27T19:57:58.978493666Z" level=info msg="Ran pod sandbox ec34e98dce97048a1737d19a0a5ee5ebd42acb718937aee4438679810b626db4 with infra container: default/busybox/POD" id=8caac8bb-bf4e-4e18-a2ff-e02de16ed011 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 19:57:58 old-k8s-version-942644 crio[839]: time="2025-10-27T19:57:58.981278744Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3b3345d6-33d0-4e3e-8173-3b7bbb3b2982 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:57:58 old-k8s-version-942644 crio[839]: time="2025-10-27T19:57:58.981530462Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=3b3345d6-33d0-4e3e-8173-3b7bbb3b2982 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:57:58 old-k8s-version-942644 crio[839]: time="2025-10-27T19:57:58.981637938Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=3b3345d6-33d0-4e3e-8173-3b7bbb3b2982 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:57:58 old-k8s-version-942644 crio[839]: time="2025-10-27T19:57:58.982978093Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6d3fc37f-5763-431b-8094-d0017360d133 name=/runtime.v1.ImageService/PullImage
	Oct 27 19:57:58 old-k8s-version-942644 crio[839]: time="2025-10-27T19:57:58.985996198Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 27 19:58:01 old-k8s-version-942644 crio[839]: time="2025-10-27T19:58:01.073417121Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=6d3fc37f-5763-431b-8094-d0017360d133 name=/runtime.v1.ImageService/PullImage
	Oct 27 19:58:01 old-k8s-version-942644 crio[839]: time="2025-10-27T19:58:01.074704321Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8a87809c-5d1e-4ec1-a585-f3fd630a5ffc name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:58:01 old-k8s-version-942644 crio[839]: time="2025-10-27T19:58:01.079264512Z" level=info msg="Creating container: default/busybox/busybox" id=eb5ea5c5-1461-43d1-94bd-3aa73ab5e456 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:58:01 old-k8s-version-942644 crio[839]: time="2025-10-27T19:58:01.07955033Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:58:01 old-k8s-version-942644 crio[839]: time="2025-10-27T19:58:01.084445288Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:58:01 old-k8s-version-942644 crio[839]: time="2025-10-27T19:58:01.085062674Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:58:01 old-k8s-version-942644 crio[839]: time="2025-10-27T19:58:01.104424464Z" level=info msg="Created container 72839546d99177ae32ccebf9d1b2531b8c3b8cffd83e7e08c04b3af999056e4a: default/busybox/busybox" id=eb5ea5c5-1461-43d1-94bd-3aa73ab5e456 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:58:01 old-k8s-version-942644 crio[839]: time="2025-10-27T19:58:01.105128846Z" level=info msg="Starting container: 72839546d99177ae32ccebf9d1b2531b8c3b8cffd83e7e08c04b3af999056e4a" id=5794a3bf-0b5b-4839-902c-974d418c643e name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:58:01 old-k8s-version-942644 crio[839]: time="2025-10-27T19:58:01.109025105Z" level=info msg="Started container" PID=1995 containerID=72839546d99177ae32ccebf9d1b2531b8c3b8cffd83e7e08c04b3af999056e4a description=default/busybox/busybox id=5794a3bf-0b5b-4839-902c-974d418c643e name=/runtime.v1.RuntimeService/StartContainer sandboxID=ec34e98dce97048a1737d19a0a5ee5ebd42acb718937aee4438679810b626db4
	Oct 27 19:58:07 old-k8s-version-942644 crio[839]: time="2025-10-27T19:58:07.800673384Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	72839546d9917       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   ec34e98dce970       busybox                                          default
	4ee64c69d7d0f       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      13 seconds ago      Running             coredns                   0                   a5ca4956cc80a       coredns-5dd5756b68-fzdkv                         kube-system
	5f3f133f797dd       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   dd462daa2a0bd       storage-provisioner                              kube-system
	4ceb87c14524b       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   a61070ad9075a       kindnet-845vr                                    kube-system
	24a5d5aa45637       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      24 seconds ago      Running             kube-proxy                0                   581300aea234a       kube-proxy-nbdp5                                 kube-system
	fa35a918428fc       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      48 seconds ago      Running             kube-scheduler            0                   d4197415620e6       kube-scheduler-old-k8s-version-942644            kube-system
	063cdd1839cfd       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      48 seconds ago      Running             kube-controller-manager   0                   361ca95807343       kube-controller-manager-old-k8s-version-942644   kube-system
	922e36a906dd5       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      48 seconds ago      Running             kube-apiserver            0                   c0a8eb50e2e62       kube-apiserver-old-k8s-version-942644            kube-system
	1d44b63df7c4a       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      48 seconds ago      Running             etcd                      0                   214d7e9cdc0b3       etcd-old-k8s-version-942644                      kube-system
	
	
	==> coredns [4ee64c69d7d0f0a3c9c152df1d443e07649aa1672fad57182e778cd6014f6492] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52698 - 714 "HINFO IN 2200748266258023286.1973460641657816094. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01190331s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-942644
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-942644
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=old-k8s-version-942644
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T19_57_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 19:57:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-942644
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:58:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 19:57:58 +0000   Mon, 27 Oct 2025 19:57:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 19:57:58 +0000   Mon, 27 Oct 2025 19:57:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 19:57:58 +0000   Mon, 27 Oct 2025 19:57:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 19:57:58 +0000   Mon, 27 Oct 2025 19:57:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-942644
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                effa8846-d81b-42a0-8993-bf5b12f2eae0
	  Boot ID:                    23ea05b4-8203-4fb9-a84a-5deb4b091cbb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-fzdkv                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-old-k8s-version-942644                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         44s
	  kube-system                 kindnet-845vr                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-942644             250m (12%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-942644    200m (10%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-nbdp5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-942644             100m (5%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  NodeHasSufficientMemory  49s (x8 over 49s)  kubelet          Node old-k8s-version-942644 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    49s (x8 over 49s)  kubelet          Node old-k8s-version-942644 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     49s (x8 over 49s)  kubelet          Node old-k8s-version-942644 status is now: NodeHasSufficientPID
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s                kubelet          Node old-k8s-version-942644 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s                kubelet          Node old-k8s-version-942644 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s                kubelet          Node old-k8s-version-942644 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s                node-controller  Node old-k8s-version-942644 event: Registered Node old-k8s-version-942644 in Controller
	  Normal  NodeReady                14s                kubelet          Node old-k8s-version-942644 status is now: NodeReady
	
	
	==> dmesg <==
	[ +40.518952] overlayfs: idmapped layers are currently not supported
	[Oct27 19:29] overlayfs: idmapped layers are currently not supported
	[Oct27 19:34] overlayfs: idmapped layers are currently not supported
	[ +33.986700] overlayfs: idmapped layers are currently not supported
	[Oct27 19:36] overlayfs: idmapped layers are currently not supported
	[Oct27 19:37] overlayfs: idmapped layers are currently not supported
	[Oct27 19:38] overlayfs: idmapped layers are currently not supported
	[Oct27 19:39] overlayfs: idmapped layers are currently not supported
	[Oct27 19:40] overlayfs: idmapped layers are currently not supported
	[  +7.015891] overlayfs: idmapped layers are currently not supported
	[Oct27 19:41] overlayfs: idmapped layers are currently not supported
	[ +28.455216] overlayfs: idmapped layers are currently not supported
	[Oct27 19:42] overlayfs: idmapped layers are currently not supported
	[ +26.490174] overlayfs: idmapped layers are currently not supported
	[Oct27 19:44] overlayfs: idmapped layers are currently not supported
	[Oct27 19:45] overlayfs: idmapped layers are currently not supported
	[Oct27 19:47] overlayfs: idmapped layers are currently not supported
	[Oct27 19:49] overlayfs: idmapped layers are currently not supported
	[ +31.410335] overlayfs: idmapped layers are currently not supported
	[Oct27 19:51] overlayfs: idmapped layers are currently not supported
	[Oct27 19:53] overlayfs: idmapped layers are currently not supported
	[Oct27 19:54] overlayfs: idmapped layers are currently not supported
	[Oct27 19:55] overlayfs: idmapped layers are currently not supported
	[Oct27 19:56] overlayfs: idmapped layers are currently not supported
	[Oct27 19:57] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1d44b63df7c4a2df458b0765ef22bd595c1d057c57a3983af3f49e3677c0be4e] <==
	{"level":"info","ts":"2025-10-27T19:57:21.009813Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-27T19:57:21.006654Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-27T19:57:21.01007Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-27T19:57:21.006688Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-27T19:57:21.011049Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-27T19:57:21.011266Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-27T19:57:21.011739Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-27T19:57:21.866761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-27T19:57:21.86687Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-27T19:57:21.866916Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-10-27T19:57:21.866954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-10-27T19:57:21.867021Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-27T19:57:21.867067Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-10-27T19:57:21.867103Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-27T19:57:21.869605Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T19:57:21.872488Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-942644 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-27T19:57:21.872566Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-27T19:57:21.873278Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T19:57:21.877295Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T19:57:21.877366Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T19:57:21.877575Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-27T19:57:21.878068Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-27T19:57:21.878552Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-27T19:57:21.878653Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-27T19:57:21.878689Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:58:09 up  2:40,  0 user,  load average: 1.44, 2.53, 2.39
	Linux old-k8s-version-942644 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4ceb87c14524b3250afd247b7b9692ab135d326776190ecbe128745b3fc0041a] <==
	I1027 19:57:44.715804       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 19:57:44.716027       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1027 19:57:44.716155       1 main.go:148] setting mtu 1500 for CNI 
	I1027 19:57:44.716176       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 19:57:44.716186       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T19:57:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 19:57:44.917376       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 19:57:44.917460       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 19:57:44.917493       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 19:57:44.918047       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 19:57:45.116685       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 19:57:45.116796       1 metrics.go:72] Registering metrics
	I1027 19:57:45.116890       1 controller.go:711] "Syncing nftables rules"
	I1027 19:57:54.921181       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 19:57:54.921237       1 main.go:301] handling current node
	I1027 19:58:04.919138       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 19:58:04.919170       1 main.go:301] handling current node
	
	
	==> kube-apiserver [922e36a906dd5d652f3d8252a74e40985c4030dfadcd7b9fffa78f6a4184c929] <==
	I1027 19:57:24.760494       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1027 19:57:24.760544       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1027 19:57:24.762288       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1027 19:57:24.762825       1 controller.go:624] quota admission added evaluator for: namespaces
	I1027 19:57:24.763068       1 aggregator.go:166] initial CRD sync complete...
	I1027 19:57:24.763145       1 autoregister_controller.go:141] Starting autoregister controller
	I1027 19:57:24.763213       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 19:57:24.763267       1 cache.go:39] Caches are synced for autoregister controller
	I1027 19:57:24.771246       1 shared_informer.go:318] Caches are synced for configmaps
	I1027 19:57:24.796055       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 19:57:25.466809       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1027 19:57:25.472538       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1027 19:57:25.472562       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 19:57:26.117684       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 19:57:26.199378       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 19:57:26.309618       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1027 19:57:26.316811       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1027 19:57:26.317857       1 controller.go:624] quota admission added evaluator for: endpoints
	I1027 19:57:26.322494       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 19:57:26.729675       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1027 19:57:27.993175       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1027 19:57:28.012948       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1027 19:57:28.031837       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1027 19:57:41.077617       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1027 19:57:41.114437       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [063cdd1839cfdc9f3405e4c5c0c16a62d80863c2b1c1b128627955dbaa3fcbd4] <==
	I1027 19:57:41.138715       1 shared_informer.go:318] Caches are synced for stateful set
	I1027 19:57:41.143579       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-845vr"
	I1027 19:57:41.150130       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nbdp5"
	I1027 19:57:41.160873       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-6j49j"
	I1027 19:57:41.184997       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-fzdkv"
	I1027 19:57:41.199647       1 shared_informer.go:318] Caches are synced for resource quota
	I1027 19:57:41.209918       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="117.181629ms"
	I1027 19:57:41.238815       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="28.694382ms"
	I1027 19:57:41.239477       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="95.291µs"
	I1027 19:57:41.564740       1 shared_informer.go:318] Caches are synced for garbage collector
	I1027 19:57:41.571056       1 shared_informer.go:318] Caches are synced for garbage collector
	I1027 19:57:41.571089       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1027 19:57:42.696635       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1027 19:57:42.721369       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-6j49j"
	I1027 19:57:42.738155       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="42.378868ms"
	I1027 19:57:42.757894       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.691938ms"
	I1027 19:57:42.758637       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="44.675µs"
	I1027 19:57:55.179989       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97.31µs"
	I1027 19:57:55.203603       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.63µs"
	I1027 19:57:56.278079       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1027 19:57:56.280598       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-fzdkv" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-fzdkv"
	I1027 19:57:56.280904       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I1027 19:57:56.371478       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.367µs"
	I1027 19:57:56.416727       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.806109ms"
	I1027 19:57:56.417000       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="138.769µs"
	
	
	==> kube-proxy [24a5d5aa456378e9e453b6e66e1b664c94bcb340cd0d6c03a571bd00d0800504] <==
	I1027 19:57:44.550641       1 server_others.go:69] "Using iptables proxy"
	I1027 19:57:44.570810       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1027 19:57:44.608131       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 19:57:44.612347       1 server_others.go:152] "Using iptables Proxier"
	I1027 19:57:44.612382       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1027 19:57:44.612390       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1027 19:57:44.612430       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1027 19:57:44.612742       1 server.go:846] "Version info" version="v1.28.0"
	I1027 19:57:44.612754       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:57:44.617794       1 config.go:188] "Starting service config controller"
	I1027 19:57:44.617829       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1027 19:57:44.617852       1 config.go:97] "Starting endpoint slice config controller"
	I1027 19:57:44.617856       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1027 19:57:44.620431       1 config.go:315] "Starting node config controller"
	I1027 19:57:44.620456       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1027 19:57:44.718652       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1027 19:57:44.719441       1 shared_informer.go:318] Caches are synced for service config
	I1027 19:57:44.720947       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [fa35a918428fcf3236c2a922c17fb7a096e66ec71562d772be8ce65e63b141fc] <==
	W1027 19:57:24.753361       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1027 19:57:24.753576       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1027 19:57:24.753403       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1027 19:57:24.753658       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1027 19:57:24.753469       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1027 19:57:24.753810       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1027 19:57:24.753970       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1027 19:57:24.754000       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1027 19:57:24.754756       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1027 19:57:24.754827       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1027 19:57:25.702452       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1027 19:57:25.702565       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1027 19:57:25.702707       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1027 19:57:25.702750       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1027 19:57:25.735088       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1027 19:57:25.735198       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 19:57:25.740799       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1027 19:57:25.740900       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1027 19:57:25.815855       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1027 19:57:25.816013       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1027 19:57:25.849453       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1027 19:57:25.849554       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1027 19:57:25.857136       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1027 19:57:25.857243       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1027 19:57:27.941279       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 27 19:57:41 old-k8s-version-942644 kubelet[1368]: I1027 19:57:41.214125    1368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/368f6d7a-80df-484b-be42-7c5234ee7284-cni-cfg\") pod \"kindnet-845vr\" (UID: \"368f6d7a-80df-484b-be42-7c5234ee7284\") " pod="kube-system/kindnet-845vr"
	Oct 27 19:57:41 old-k8s-version-942644 kubelet[1368]: I1027 19:57:41.214185    1368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/368f6d7a-80df-484b-be42-7c5234ee7284-xtables-lock\") pod \"kindnet-845vr\" (UID: \"368f6d7a-80df-484b-be42-7c5234ee7284\") " pod="kube-system/kindnet-845vr"
	Oct 27 19:57:41 old-k8s-version-942644 kubelet[1368]: I1027 19:57:41.214237    1368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/368f6d7a-80df-484b-be42-7c5234ee7284-lib-modules\") pod \"kindnet-845vr\" (UID: \"368f6d7a-80df-484b-be42-7c5234ee7284\") " pod="kube-system/kindnet-845vr"
	Oct 27 19:57:41 old-k8s-version-942644 kubelet[1368]: I1027 19:57:41.214276    1368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt6bw\" (UniqueName: \"kubernetes.io/projected/368f6d7a-80df-484b-be42-7c5234ee7284-kube-api-access-nt6bw\") pod \"kindnet-845vr\" (UID: \"368f6d7a-80df-484b-be42-7c5234ee7284\") " pod="kube-system/kindnet-845vr"
	Oct 27 19:57:41 old-k8s-version-942644 kubelet[1368]: I1027 19:57:41.214305    1368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68d29ee2-8cd1-4c17-895e-2a052615395b-xtables-lock\") pod \"kube-proxy-nbdp5\" (UID: \"68d29ee2-8cd1-4c17-895e-2a052615395b\") " pod="kube-system/kube-proxy-nbdp5"
	Oct 27 19:57:41 old-k8s-version-942644 kubelet[1368]: I1027 19:57:41.214326    1368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68d29ee2-8cd1-4c17-895e-2a052615395b-lib-modules\") pod \"kube-proxy-nbdp5\" (UID: \"68d29ee2-8cd1-4c17-895e-2a052615395b\") " pod="kube-system/kube-proxy-nbdp5"
	Oct 27 19:57:41 old-k8s-version-942644 kubelet[1368]: W1027 19:57:41.545500    1368 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/10950a3c65bf121769a3f4633bb765a9cdb0464d8195fe6bebdec626b5deca5a/crio-a61070ad9075a6bbc3153593fc56323358023b19b935bcfee518a02555438151 WatchSource:0}: Error finding container a61070ad9075a6bbc3153593fc56323358023b19b935bcfee518a02555438151: Status 404 returned error can't find the container with id a61070ad9075a6bbc3153593fc56323358023b19b935bcfee518a02555438151
	Oct 27 19:57:42 old-k8s-version-942644 kubelet[1368]: E1027 19:57:42.316019    1368 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Oct 27 19:57:42 old-k8s-version-942644 kubelet[1368]: E1027 19:57:42.316609    1368 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/68d29ee2-8cd1-4c17-895e-2a052615395b-kube-proxy podName:68d29ee2-8cd1-4c17-895e-2a052615395b nodeName:}" failed. No retries permitted until 2025-10-27 19:57:42.816575648 +0000 UTC m=+14.868983209 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/68d29ee2-8cd1-4c17-895e-2a052615395b-kube-proxy") pod "kube-proxy-nbdp5" (UID: "68d29ee2-8cd1-4c17-895e-2a052615395b") : failed to sync configmap cache: timed out waiting for the condition
	Oct 27 19:57:43 old-k8s-version-942644 kubelet[1368]: W1027 19:57:43.034038    1368 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/10950a3c65bf121769a3f4633bb765a9cdb0464d8195fe6bebdec626b5deca5a/crio-581300aea234a8d2e3350d72f795d40599d3def1e7d6938f0a97760ccc507ace WatchSource:0}: Error finding container 581300aea234a8d2e3350d72f795d40599d3def1e7d6938f0a97760ccc507ace: Status 404 returned error can't find the container with id 581300aea234a8d2e3350d72f795d40599d3def1e7d6938f0a97760ccc507ace
	Oct 27 19:57:45 old-k8s-version-942644 kubelet[1368]: I1027 19:57:45.388453    1368 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-nbdp5" podStartSLOduration=4.388400723 podCreationTimestamp="2025-10-27 19:57:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:57:45.383094256 +0000 UTC m=+17.435501833" watchObservedRunningTime="2025-10-27 19:57:45.388400723 +0000 UTC m=+17.440808283"
	Oct 27 19:57:48 old-k8s-version-942644 kubelet[1368]: I1027 19:57:48.224062    1368 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-845vr" podStartSLOduration=4.228908983 podCreationTimestamp="2025-10-27 19:57:41 +0000 UTC" firstStartedPulling="2025-10-27 19:57:41.576855543 +0000 UTC m=+13.629263104" lastFinishedPulling="2025-10-27 19:57:44.571961977 +0000 UTC m=+16.624369546" observedRunningTime="2025-10-27 19:57:45.417988963 +0000 UTC m=+17.470396532" watchObservedRunningTime="2025-10-27 19:57:48.224015425 +0000 UTC m=+20.276422986"
	Oct 27 19:57:55 old-k8s-version-942644 kubelet[1368]: I1027 19:57:55.136755    1368 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 27 19:57:55 old-k8s-version-942644 kubelet[1368]: I1027 19:57:55.174327    1368 topology_manager.go:215] "Topology Admit Handler" podUID="9bbe01db-3ae1-42aa-967e-299956b53a62" podNamespace="kube-system" podName="coredns-5dd5756b68-fzdkv"
	Oct 27 19:57:55 old-k8s-version-942644 kubelet[1368]: I1027 19:57:55.183534    1368 topology_manager.go:215] "Topology Admit Handler" podUID="b7501b67-1642-472f-aec7-bf5f4cf46c0c" podNamespace="kube-system" podName="storage-provisioner"
	Oct 27 19:57:55 old-k8s-version-942644 kubelet[1368]: I1027 19:57:55.231303    1368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9bbe01db-3ae1-42aa-967e-299956b53a62-config-volume\") pod \"coredns-5dd5756b68-fzdkv\" (UID: \"9bbe01db-3ae1-42aa-967e-299956b53a62\") " pod="kube-system/coredns-5dd5756b68-fzdkv"
	Oct 27 19:57:55 old-k8s-version-942644 kubelet[1368]: I1027 19:57:55.231365    1368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b7501b67-1642-472f-aec7-bf5f4cf46c0c-tmp\") pod \"storage-provisioner\" (UID: \"b7501b67-1642-472f-aec7-bf5f4cf46c0c\") " pod="kube-system/storage-provisioner"
	Oct 27 19:57:55 old-k8s-version-942644 kubelet[1368]: I1027 19:57:55.231406    1368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqczg\" (UniqueName: \"kubernetes.io/projected/9bbe01db-3ae1-42aa-967e-299956b53a62-kube-api-access-rqczg\") pod \"coredns-5dd5756b68-fzdkv\" (UID: \"9bbe01db-3ae1-42aa-967e-299956b53a62\") " pod="kube-system/coredns-5dd5756b68-fzdkv"
	Oct 27 19:57:55 old-k8s-version-942644 kubelet[1368]: I1027 19:57:55.231430    1368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx5s7\" (UniqueName: \"kubernetes.io/projected/b7501b67-1642-472f-aec7-bf5f4cf46c0c-kube-api-access-lx5s7\") pod \"storage-provisioner\" (UID: \"b7501b67-1642-472f-aec7-bf5f4cf46c0c\") " pod="kube-system/storage-provisioner"
	Oct 27 19:57:55 old-k8s-version-942644 kubelet[1368]: W1027 19:57:55.497750    1368 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/10950a3c65bf121769a3f4633bb765a9cdb0464d8195fe6bebdec626b5deca5a/crio-dd462daa2a0bd644f1d62844ae260c7dc5e901df1514bb5176ee707b22299398 WatchSource:0}: Error finding container dd462daa2a0bd644f1d62844ae260c7dc5e901df1514bb5176ee707b22299398: Status 404 returned error can't find the container with id dd462daa2a0bd644f1d62844ae260c7dc5e901df1514bb5176ee707b22299398
	Oct 27 19:57:56 old-k8s-version-942644 kubelet[1368]: I1027 19:57:56.394669    1368 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-fzdkv" podStartSLOduration=15.394615907 podCreationTimestamp="2025-10-27 19:57:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:57:56.372334594 +0000 UTC m=+28.424742229" watchObservedRunningTime="2025-10-27 19:57:56.394615907 +0000 UTC m=+28.447023476"
	Oct 27 19:57:58 old-k8s-version-942644 kubelet[1368]: I1027 19:57:58.653956    1368 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.653913257 podCreationTimestamp="2025-10-27 19:57:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:57:56.411658149 +0000 UTC m=+28.464065709" watchObservedRunningTime="2025-10-27 19:57:58.653913257 +0000 UTC m=+30.706320826"
	Oct 27 19:57:58 old-k8s-version-942644 kubelet[1368]: I1027 19:57:58.654143    1368 topology_manager.go:215] "Topology Admit Handler" podUID="54b4e94f-69ec-4136-8574-9416e44e9e48" podNamespace="default" podName="busybox"
	Oct 27 19:57:58 old-k8s-version-942644 kubelet[1368]: I1027 19:57:58.668116    1368 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv69s\" (UniqueName: \"kubernetes.io/projected/54b4e94f-69ec-4136-8574-9416e44e9e48-kube-api-access-pv69s\") pod \"busybox\" (UID: \"54b4e94f-69ec-4136-8574-9416e44e9e48\") " pod="default/busybox"
	Oct 27 19:58:01 old-k8s-version-942644 kubelet[1368]: I1027 19:58:01.392474    1368 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.301021139 podCreationTimestamp="2025-10-27 19:57:58 +0000 UTC" firstStartedPulling="2025-10-27 19:57:58.982346858 +0000 UTC m=+31.034754419" lastFinishedPulling="2025-10-27 19:58:01.073754785 +0000 UTC m=+33.126162346" observedRunningTime="2025-10-27 19:58:01.392230179 +0000 UTC m=+33.444637740" watchObservedRunningTime="2025-10-27 19:58:01.392429066 +0000 UTC m=+33.444836627"
	
	
	==> storage-provisioner [5f3f133f797dd4c4f21de53699a900a8771e8bb8a97caa3254c211c3a4020343] <==
	I1027 19:57:55.565181       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 19:57:55.581949       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 19:57:55.582057       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1027 19:57:55.598031       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 19:57:55.598266       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-942644_0c01253e-6adb-4569-8574-e3def0ab01c3!
	I1027 19:57:55.605044       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ba540a69-e48f-48b4-a3e1-f6e693f646a8", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-942644_0c01253e-6adb-4569-8574-e3def0ab01c3 became leader
	I1027 19:57:55.700326       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-942644_0c01253e-6adb-4569-8574-e3def0ab01c3!
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-942644 -n old-k8s-version-942644
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-942644 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.44s)
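
The post-mortem block above can be replayed by hand when triaging this failure outside CI. A minimal sketch, assuming the same profile name and a local out/minikube-linux-arm64 build as in this run; the crictl label filter mirrors the one the pause path uses in the logs below, and crictl is available inside the node, as those logs show:

	# API server state for the profile (same check as helpers_test.go:262)
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-942644
	# list any pods not in phase Running (same query as helpers_test.go:269)
	kubectl --context old-k8s-version-942644 get po -A --field-selector=status.phase!=Running
	# container-level view from inside the node
	out/minikube-linux-arm64 ssh -p old-k8s-version-942644 -- sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system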
TestStartStop/group/old-k8s-version/serial/Pause (8.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-942644 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-942644 --alsologtostderr -v=1: exit status 80 (2.445345205s)

-- stdout --
	* Pausing node old-k8s-version-942644 ... 
	
-- /stdout --
** stderr ** 
	I1027 19:59:29.742366  454583 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:59:29.743800  454583 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:59:29.743850  454583 out.go:374] Setting ErrFile to fd 2...
	I1027 19:59:29.743871  454583 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:59:29.744155  454583 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 19:59:29.744458  454583 out.go:368] Setting JSON to false
	I1027 19:59:29.744505  454583 mustload.go:65] Loading cluster: old-k8s-version-942644
	I1027 19:59:29.744929  454583 config.go:182] Loaded profile config "old-k8s-version-942644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1027 19:59:29.751428  454583 cli_runner.go:164] Run: docker container inspect old-k8s-version-942644 --format={{.State.Status}}
	I1027 19:59:29.795412  454583 host.go:66] Checking if "old-k8s-version-942644" exists ...
	I1027 19:59:29.795748  454583 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:59:29.921316  454583 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:78 SystemTime:2025-10-27 19:59:29.908632931 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 19:59:29.922141  454583 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-942644 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1027 19:59:29.924180  454583 out.go:179] * Pausing node old-k8s-version-942644 ... 
	I1027 19:59:29.925485  454583 host.go:66] Checking if "old-k8s-version-942644" exists ...
	I1027 19:59:29.925792  454583 ssh_runner.go:195] Run: systemctl --version
	I1027 19:59:29.925834  454583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-942644
	I1027 19:59:29.960775  454583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33413 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/old-k8s-version-942644/id_rsa Username:docker}
	I1027 19:59:30.091330  454583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:59:30.132104  454583 pause.go:52] kubelet running: true
	I1027 19:59:30.132169  454583 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 19:59:30.430676  454583 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 19:59:30.430799  454583 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 19:59:30.544738  454583 cri.go:89] found id: "955dc230362e096be0e14119979eeb4b516307eceab1bee2309c5c10aee85887"
	I1027 19:59:30.544830  454583 cri.go:89] found id: "489f480edd095f2ec8dafa5787de84eb1a9ed7d0820e496497cd82557cb54df5"
	I1027 19:59:30.544836  454583 cri.go:89] found id: "4d661e9051c59fc87cc15877e6a8f433cac6f4f3e16430714cea6c010b259343"
	I1027 19:59:30.544840  454583 cri.go:89] found id: "499383b8d8fc1d11df4a7905d477e6e6830cc86dd0bc67d3eb588824cec2dc07"
	I1027 19:59:30.544844  454583 cri.go:89] found id: "7a2d5a71f412b243de7e7e81e23b51bc4017375d2bae9648942dc2819590c31d"
	I1027 19:59:30.544847  454583 cri.go:89] found id: "4191fbc773c7860df74d9c43a79eaa2b2c2fedf87a834522d6484976aa6a7b38"
	I1027 19:59:30.544850  454583 cri.go:89] found id: "8da113f89d96b45a2ba55effd9ad48b2c52db76a2494916df6764912cbea8fcf"
	I1027 19:59:30.544853  454583 cri.go:89] found id: "8119904b23c367a5d244f25c4fe2bc1cd3d35a55a65310c3653fba1207a28c6c"
	I1027 19:59:30.544856  454583 cri.go:89] found id: "5f6e29c2f0799fd26d70aeb640fbc8515dabd349f18d85e616d9b44fa0a76304"
	I1027 19:59:30.544862  454583 cri.go:89] found id: "e1a6d4b6855d2b349dde7afdcd071acf4cd94ca737e469cfa84893b5b71d4655"
	I1027 19:59:30.544866  454583 cri.go:89] found id: "41c348ac8acf87523b5ca5a1bc063fd5887c49974ca78f9ddfd69cc2af77e23d"
	I1027 19:59:30.544869  454583 cri.go:89] found id: ""
	I1027 19:59:30.544916  454583 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:59:30.573625  454583 retry.go:31] will retry after 327.19932ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:59:30Z" level=error msg="open /run/runc: no such file or directory"
	I1027 19:59:30.901132  454583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:59:30.925121  454583 pause.go:52] kubelet running: false
	I1027 19:59:30.925194  454583 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 19:59:31.189398  454583 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 19:59:31.189492  454583 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 19:59:31.277633  454583 cri.go:89] found id: "955dc230362e096be0e14119979eeb4b516307eceab1bee2309c5c10aee85887"
	I1027 19:59:31.277659  454583 cri.go:89] found id: "489f480edd095f2ec8dafa5787de84eb1a9ed7d0820e496497cd82557cb54df5"
	I1027 19:59:31.277664  454583 cri.go:89] found id: "4d661e9051c59fc87cc15877e6a8f433cac6f4f3e16430714cea6c010b259343"
	I1027 19:59:31.277668  454583 cri.go:89] found id: "499383b8d8fc1d11df4a7905d477e6e6830cc86dd0bc67d3eb588824cec2dc07"
	I1027 19:59:31.277672  454583 cri.go:89] found id: "7a2d5a71f412b243de7e7e81e23b51bc4017375d2bae9648942dc2819590c31d"
	I1027 19:59:31.277675  454583 cri.go:89] found id: "4191fbc773c7860df74d9c43a79eaa2b2c2fedf87a834522d6484976aa6a7b38"
	I1027 19:59:31.277679  454583 cri.go:89] found id: "8da113f89d96b45a2ba55effd9ad48b2c52db76a2494916df6764912cbea8fcf"
	I1027 19:59:31.277682  454583 cri.go:89] found id: "8119904b23c367a5d244f25c4fe2bc1cd3d35a55a65310c3653fba1207a28c6c"
	I1027 19:59:31.277685  454583 cri.go:89] found id: "5f6e29c2f0799fd26d70aeb640fbc8515dabd349f18d85e616d9b44fa0a76304"
	I1027 19:59:31.277692  454583 cri.go:89] found id: "e1a6d4b6855d2b349dde7afdcd071acf4cd94ca737e469cfa84893b5b71d4655"
	I1027 19:59:31.277695  454583 cri.go:89] found id: "41c348ac8acf87523b5ca5a1bc063fd5887c49974ca78f9ddfd69cc2af77e23d"
	I1027 19:59:31.277698  454583 cri.go:89] found id: ""
	I1027 19:59:31.277760  454583 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:59:31.290090  454583 retry.go:31] will retry after 405.090321ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:59:31Z" level=error msg="open /run/runc: no such file or directory"
	I1027 19:59:31.695386  454583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:59:31.710609  454583 pause.go:52] kubelet running: false
	I1027 19:59:31.710668  454583 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 19:59:31.932913  454583 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 19:59:31.932988  454583 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 19:59:32.045923  454583 cri.go:89] found id: "955dc230362e096be0e14119979eeb4b516307eceab1bee2309c5c10aee85887"
	I1027 19:59:32.045951  454583 cri.go:89] found id: "489f480edd095f2ec8dafa5787de84eb1a9ed7d0820e496497cd82557cb54df5"
	I1027 19:59:32.045957  454583 cri.go:89] found id: "4d661e9051c59fc87cc15877e6a8f433cac6f4f3e16430714cea6c010b259343"
	I1027 19:59:32.045961  454583 cri.go:89] found id: "499383b8d8fc1d11df4a7905d477e6e6830cc86dd0bc67d3eb588824cec2dc07"
	I1027 19:59:32.045964  454583 cri.go:89] found id: "7a2d5a71f412b243de7e7e81e23b51bc4017375d2bae9648942dc2819590c31d"
	I1027 19:59:32.045976  454583 cri.go:89] found id: "4191fbc773c7860df74d9c43a79eaa2b2c2fedf87a834522d6484976aa6a7b38"
	I1027 19:59:32.045979  454583 cri.go:89] found id: "8da113f89d96b45a2ba55effd9ad48b2c52db76a2494916df6764912cbea8fcf"
	I1027 19:59:32.045982  454583 cri.go:89] found id: "8119904b23c367a5d244f25c4fe2bc1cd3d35a55a65310c3653fba1207a28c6c"
	I1027 19:59:32.045985  454583 cri.go:89] found id: "5f6e29c2f0799fd26d70aeb640fbc8515dabd349f18d85e616d9b44fa0a76304"
	I1027 19:59:32.045994  454583 cri.go:89] found id: "e1a6d4b6855d2b349dde7afdcd071acf4cd94ca737e469cfa84893b5b71d4655"
	I1027 19:59:32.045997  454583 cri.go:89] found id: "41c348ac8acf87523b5ca5a1bc063fd5887c49974ca78f9ddfd69cc2af77e23d"
	I1027 19:59:32.046000  454583 cri.go:89] found id: ""
	I1027 19:59:32.046060  454583 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:59:32.071062  454583 out.go:203] 
	W1027 19:59:32.072652  454583 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:59:32Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 19:59:32.072675  454583 out.go:285] * 
	W1027 19:59:32.082662  454583 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 19:59:32.087375  454583 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-942644 --alsologtostderr -v=1 failed: exit status 80
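The failure mode is identical across all three attempts: `sudo runc list -f json` exits 1 because /run/runc does not exist on the node, so pause can never enumerate the containers it had just found via crictl. A minimal reproduction sketch in Go; the crun fallback is an assumption for illustration (the log only proves that /run/runc is missing, plausibly because the runtime in use keeps its state elsewhere):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // listContainers reproduces the probe that fails above: ask runc for its
    // container list, then fall back to crun's state root on error. The
    // /run/crun path is hypothetical; adjust it to the runtime actually used.
    func listContainers() ([]byte, error) {
    	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
    	if err == nil {
    		return out, nil
    	}
    	return exec.Command("sudo", "crun", "--root", "/run/crun", "list").Output()
    }

    func main() {
    	out, err := listContainers()
    	if err != nil {
    		fmt.Println("list running:", err)
    		return
    	}
    	fmt.Printf("%s\n", out)
    }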
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-942644
helpers_test.go:243: (dbg) docker inspect old-k8s-version-942644:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "10950a3c65bf121769a3f4633bb765a9cdb0464d8195fe6bebdec626b5deca5a",
	        "Created": "2025-10-27T19:57:02.220286943Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 448778,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T19:58:22.815853824Z",
	            "FinishedAt": "2025-10-27T19:58:21.990485962Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/10950a3c65bf121769a3f4633bb765a9cdb0464d8195fe6bebdec626b5deca5a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/10950a3c65bf121769a3f4633bb765a9cdb0464d8195fe6bebdec626b5deca5a/hostname",
	        "HostsPath": "/var/lib/docker/containers/10950a3c65bf121769a3f4633bb765a9cdb0464d8195fe6bebdec626b5deca5a/hosts",
	        "LogPath": "/var/lib/docker/containers/10950a3c65bf121769a3f4633bb765a9cdb0464d8195fe6bebdec626b5deca5a/10950a3c65bf121769a3f4633bb765a9cdb0464d8195fe6bebdec626b5deca5a-json.log",
	        "Name": "/old-k8s-version-942644",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-942644:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-942644",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "10950a3c65bf121769a3f4633bb765a9cdb0464d8195fe6bebdec626b5deca5a",
	                "LowerDir": "/var/lib/docker/overlay2/ac29eb6ae0b9a6af31ee0c5db23452fde2f2593679f1a0a32d72317c10177f51-init/diff:/var/lib/docker/overlay2/0567218bbe05e8b59b2aef7ad82032d6716040c1f9d9d73e7de8682101079f76/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ac29eb6ae0b9a6af31ee0c5db23452fde2f2593679f1a0a32d72317c10177f51/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ac29eb6ae0b9a6af31ee0c5db23452fde2f2593679f1a0a32d72317c10177f51/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ac29eb6ae0b9a6af31ee0c5db23452fde2f2593679f1a0a32d72317c10177f51/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-942644",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-942644/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-942644",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-942644",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-942644",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f6088abc4e01f24e2b6ae491e6149a5f7e5e06a8e864997679892367c0ffea3c",
	            "SandboxKey": "/var/run/docker/netns/f6088abc4e01",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33413"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33414"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33417"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33415"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33416"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-942644": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:a4:45:d5:16:28",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7843814e6694ec1a5f4ec1f5c9fd29cf174989ede4bbc78e0ebce293c1be9090",
	                    "EndpointID": "39156859c4604efd2df2863c5e3925de2fea1de439a49b4a07789c5df04f2813",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-942644",
	                        "10950a3c65bf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
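The SSH port 33413 that sshutil.go:53 connected to earlier is the result of evaluating the same Go template as the `docker container inspect -f` call against the NetworkSettings.Ports object in the JSON above. A self-contained sketch with a trimmed-down stand-in for the inspect payload (the struct shape is an illustrative subset of the real one):

    package main

    import (
    	"os"
    	"text/template"
    )

    // inspect is a minimal subset of the docker inspect payload shown above.
    type inspect struct {
    	NetworkSettings struct {
    		Ports map[string][]struct{ HostIp, HostPort string }
    	}
    }

    func main() {
    	var i inspect
    	i.NetworkSettings.Ports = map[string][]struct{ HostIp, HostPort string }{
    		"22/tcp": {{HostIp: "127.0.0.1", HostPort: "33413"}},
    	}
    	// The same template string the cli_runner call used earlier in the log.
    	t := template.Must(template.New("port").Parse(
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
    	t.Execute(os.Stdout, i) // prints 33413
    }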
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-942644 -n old-k8s-version-942644
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-942644 -n old-k8s-version-942644: exit status 2 (456.940315ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-942644 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-942644 logs -n 25: (1.542513887s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-750423 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-750423             │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │                     │
	│ ssh     │ -p cilium-750423 sudo containerd config dump                                                                                                                                                                                                  │ cilium-750423             │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │                     │
	│ ssh     │ -p cilium-750423 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-750423             │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │                     │
	│ ssh     │ -p cilium-750423 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-750423             │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │                     │
	│ ssh     │ -p cilium-750423 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-750423             │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │                     │
	│ ssh     │ -p cilium-750423 sudo crio config                                                                                                                                                                                                             │ cilium-750423             │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │                     │
	│ delete  │ -p cilium-750423                                                                                                                                                                                                                              │ cilium-750423             │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │ 27 Oct 25 19:54 UTC │
	│ start   │ -p force-systemd-env-105360 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-105360  │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │ 27 Oct 25 19:55 UTC │
	│ delete  │ -p force-systemd-env-105360                                                                                                                                                                                                                   │ force-systemd-env-105360  │ jenkins │ v1.37.0 │ 27 Oct 25 19:55 UTC │ 27 Oct 25 19:55 UTC │
	│ start   │ -p cert-expiration-280013 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-280013    │ jenkins │ v1.37.0 │ 27 Oct 25 19:55 UTC │ 27 Oct 25 19:55 UTC │
	│ delete  │ -p kubernetes-upgrade-524430                                                                                                                                                                                                                  │ kubernetes-upgrade-524430 │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:56 UTC │
	│ start   │ -p cert-options-319273 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-319273       │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:56 UTC │
	│ ssh     │ cert-options-319273 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-319273       │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:56 UTC │
	│ ssh     │ -p cert-options-319273 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-319273       │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:56 UTC │
	│ delete  │ -p cert-options-319273                                                                                                                                                                                                                        │ cert-options-319273       │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:56 UTC │
	│ start   │ -p old-k8s-version-942644 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:57 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-942644 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │                     │
	│ stop    │ -p old-k8s-version-942644 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:58 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-942644 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:58 UTC │
	│ start   │ -p old-k8s-version-942644 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:59 UTC │
	│ start   │ -p cert-expiration-280013 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-280013    │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:58 UTC │
	│ delete  │ -p cert-expiration-280013                                                                                                                                                                                                                     │ cert-expiration-280013    │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:59 UTC │
	│ start   │ -p no-preload-300878 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-300878         │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │                     │
	│ image   │ old-k8s-version-942644 image list --format=json                                                                                                                                                                                               │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 19:59 UTC │
	│ pause   │ -p old-k8s-version-942644 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 19:59:01
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
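	(Each entry below follows the klog-style header documented above. As a worked example, a small Go snippet that splits such a line into its fields; the regular expression is illustrative, not minikube code:)

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // klogLine matches the "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg"
    // header documented above; the pattern is an illustrative approximation.
    var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

    func main() {
    	m := klogLine.FindStringSubmatch("I1027 19:59:01.157842  452092 out.go:360] Setting OutFile to fd 1 ...")
    	// severity, date, time, thread id, file:line, message
    	fmt.Println(m[1], m[2], m[3], m[4], m[5], m[6])
    }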
	I1027 19:59:01.157842  452092 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:59:01.158049  452092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:59:01.158079  452092 out.go:374] Setting ErrFile to fd 2...
	I1027 19:59:01.158101  452092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:59:01.158445  452092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 19:59:01.159038  452092 out.go:368] Setting JSON to false
	I1027 19:59:01.163718  452092 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9694,"bootTime":1761585448,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1027 19:59:01.163849  452092 start.go:141] virtualization:  
	I1027 19:59:01.169861  452092 out.go:179] * [no-preload-300878] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 19:59:01.173058  452092 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:59:01.173094  452092 notify.go:220] Checking for updates...
	I1027 19:59:01.179117  452092 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:59:01.182180  452092 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 19:59:01.185123  452092 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube
	I1027 19:59:01.188218  452092 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 19:59:01.191191  452092 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:59:01.194660  452092 config.go:182] Loaded profile config "old-k8s-version-942644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1027 19:59:01.194865  452092 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:59:01.243211  452092 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 19:59:01.243417  452092 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:59:01.343143  452092 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 19:59:01.329749253 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 19:59:01.343241  452092 docker.go:318] overlay module found
	I1027 19:59:01.347639  452092 out.go:179] * Using the docker driver based on user configuration
	I1027 19:59:01.351836  452092 start.go:305] selected driver: docker
	I1027 19:59:01.351864  452092 start.go:925] validating driver "docker" against <nil>
	I1027 19:59:01.351884  452092 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:59:01.352827  452092 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:59:01.468281  452092 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 19:59:01.455404823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 19:59:01.468445  452092 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1027 19:59:01.468691  452092 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 19:59:01.472171  452092 out.go:179] * Using Docker driver with root privileges
	I1027 19:59:01.475214  452092 cni.go:84] Creating CNI manager for ""
	I1027 19:59:01.475292  452092 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:59:01.475310  452092 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 19:59:01.475394  452092 start.go:349] cluster config:
	{Name:no-preload-300878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-300878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:59:01.478716  452092 out.go:179] * Starting "no-preload-300878" primary control-plane node in "no-preload-300878" cluster
	I1027 19:59:01.481752  452092 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 19:59:01.484862  452092 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 19:59:01.487862  452092 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:59:01.487953  452092 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 19:59:01.488012  452092 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/config.json ...
	I1027 19:59:01.488045  452092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/config.json: {Name:mkbe34231d31e2da01fa535a1b181a68e268e53e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:59:01.488269  452092 cache.go:107] acquiring lock: {Name:mk2c9b32a28909ddde1ea9e1562c451629f3a8bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:59:01.488329  452092 cache.go:115] /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1027 19:59:01.488343  452092 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 82.065µs
	I1027 19:59:01.488351  452092 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1027 19:59:01.488363  452092 cache.go:107] acquiring lock: {Name:mk41739ca1e3ab4374125f086ea6ae568ba48650 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:59:01.488436  452092 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1027 19:59:01.488626  452092 cache.go:107] acquiring lock: {Name:mk633cfcec5e23624dd56cce5b9a2941a9eb26ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:59:01.488703  452092 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 19:59:01.488802  452092 cache.go:107] acquiring lock: {Name:mk8f67f1010641520ce2aed88e36df35defaec67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:59:01.488884  452092 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1027 19:59:01.489027  452092 cache.go:107] acquiring lock: {Name:mk5a3679f1cf078979f9b59308ac24da693653f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:59:01.489114  452092 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1027 19:59:01.489210  452092 cache.go:107] acquiring lock: {Name:mk6af7dde40e27f19a53963487980377af2c3c95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:59:01.489278  452092 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1027 19:59:01.489380  452092 cache.go:107] acquiring lock: {Name:mk263e9fca65865b31b3432ab012737135a60a06 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:59:01.489448  452092 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1027 19:59:01.489529  452092 cache.go:107] acquiring lock: {Name:mkfced02b35956836ba86d3e97965fe21c458ea4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:59:01.489601  452092 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1027 19:59:01.492385  452092 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1027 19:59:01.492949  452092 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1027 19:59:01.493152  452092 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1027 19:59:01.493313  452092 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1027 19:59:01.493775  452092 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1027 19:59:01.493957  452092 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 19:59:01.495116  452092 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1027 19:59:01.523167  452092 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 19:59:01.523192  452092 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 19:59:01.523206  452092 cache.go:232] Successfully downloaded all kic artifacts
	I1027 19:59:01.523228  452092 start.go:360] acquireMachinesLock for no-preload-300878: {Name:mk35847aee9eb4cb8c66d589a420d0e6e5324ab7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:59:01.523347  452092 start.go:364] duration metric: took 95.554µs to acquireMachinesLock for "no-preload-300878"
	I1027 19:59:01.523380  452092 start.go:93] Provisioning new machine with config: &{Name:no-preload-300878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-300878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:59:01.523444  452092 start.go:125] createHost starting for "" (driver="docker")
	W1027 19:58:57.694135  448653 pod_ready.go:104] pod "coredns-5dd5756b68-fzdkv" is not "Ready", error: <nil>
	W1027 19:58:59.698811  448653 pod_ready.go:104] pod "coredns-5dd5756b68-fzdkv" is not "Ready", error: <nil>
	W1027 19:59:02.195309  448653 pod_ready.go:104] pod "coredns-5dd5756b68-fzdkv" is not "Ready", error: <nil>
	I1027 19:59:01.536136  452092 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 19:59:01.536416  452092 start.go:159] libmachine.API.Create for "no-preload-300878" (driver="docker")
	I1027 19:59:01.536450  452092 client.go:168] LocalClient.Create starting
	I1027 19:59:01.536508  452092 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem
	I1027 19:59:01.536544  452092 main.go:141] libmachine: Decoding PEM data...
	I1027 19:59:01.536558  452092 main.go:141] libmachine: Parsing certificate...
	I1027 19:59:01.536611  452092 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem
	I1027 19:59:01.536626  452092 main.go:141] libmachine: Decoding PEM data...
	I1027 19:59:01.536636  452092 main.go:141] libmachine: Parsing certificate...
	I1027 19:59:01.536968  452092 cli_runner.go:164] Run: docker network inspect no-preload-300878 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 19:59:01.567416  452092 cli_runner.go:211] docker network inspect no-preload-300878 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 19:59:01.567496  452092 network_create.go:284] running [docker network inspect no-preload-300878] to gather additional debugging logs...
	I1027 19:59:01.567514  452092 cli_runner.go:164] Run: docker network inspect no-preload-300878
	W1027 19:59:01.587136  452092 cli_runner.go:211] docker network inspect no-preload-300878 returned with exit code 1
	I1027 19:59:01.587168  452092 network_create.go:287] error running [docker network inspect no-preload-300878]: docker network inspect no-preload-300878: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-300878 not found
	I1027 19:59:01.587181  452092 network_create.go:289] output of [docker network inspect no-preload-300878]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-300878 not found
	
	** /stderr **
	I1027 19:59:01.587297  452092 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 19:59:01.604978  452092 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-74ee89127400 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:aa:c7:29:bd:7a:a9} reservation:<nil>}
	I1027 19:59:01.605418  452092 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9c57ca829ac8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:0a:24:08:53:d6:07} reservation:<nil>}
	I1027 19:59:01.605659  452092 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b7a7c45fd176 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:52:e6:cd:c8:a6} reservation:<nil>}
	I1027 19:59:01.605953  452092 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-7843814e6694 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1a:ff:3b:a0:5e:3b} reservation:<nil>}
	I1027 19:59:01.606532  452092 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c5cf40}
	I1027 19:59:01.606561  452092 network_create.go:124] attempt to create docker network no-preload-300878 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1027 19:59:01.606666  452092 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-300878 no-preload-300878
	I1027 19:59:01.714348  452092 network_create.go:108] docker network no-preload-300878 192.168.85.0/24 created
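The subnet probe above walks the private ranges in fixed steps (192.168.49.0/24, .58, .67, .76, then the free .85) and creates the bridge on the first unclaimed one. A hedged sketch of that scan; the step of 9 in the third octet is inferred from the probes above, and the `taken` map stands in for the real interface lookup:

    package main

    import "fmt"

    // freeSubnet mirrors the scan visible above: starting at 192.168.49.0/24,
    // step the third octet and return the first subnet not already claimed
    // by an existing bridge.
    func freeSubnet(taken map[string]bool) string {
    	for octet := 49; octet <= 255; octet += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
    		if !taken[cidr] {
    			return cidr
    		}
    	}
    	return ""
    }

    func main() {
    	taken := map[string]bool{
    		"192.168.49.0/24": true, "192.168.58.0/24": true,
    		"192.168.67.0/24": true, "192.168.76.0/24": true,
    	}
    	fmt.Println(freeSubnet(taken)) // 192.168.85.0/24
    }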
	I1027 19:59:01.714375  452092 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-300878" container
	I1027 19:59:01.714537  452092 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 19:59:01.738333  452092 cli_runner.go:164] Run: docker volume create no-preload-300878 --label name.minikube.sigs.k8s.io=no-preload-300878 --label created_by.minikube.sigs.k8s.io=true
	I1027 19:59:01.764779  452092 oci.go:103] Successfully created a docker volume no-preload-300878
	I1027 19:59:01.764863  452092 cli_runner.go:164] Run: docker run --rm --name no-preload-300878-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-300878 --entrypoint /usr/bin/test -v no-preload-300878:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 19:59:01.844597  452092 cache.go:162] opening:  /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1027 19:59:01.864714  452092 cache.go:162] opening:  /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1027 19:59:01.865364  452092 cache.go:162] opening:  /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1027 19:59:01.872068  452092 cache.go:162] opening:  /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1027 19:59:01.887005  452092 cache.go:162] opening:  /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1027 19:59:01.892609  452092 cache.go:162] opening:  /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1027 19:59:01.901224  452092 cache.go:162] opening:  /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1027 19:59:01.916371  452092 cache.go:157] /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1027 19:59:01.916394  452092 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 427.18528ms
	I1027 19:59:01.916406  452092 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1027 19:59:02.198571  452092 cache.go:157] /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1027 19:59:02.198602  452092 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 709.577405ms
	I1027 19:59:02.198613  452092 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1027 19:59:02.777073  452092 cli_runner.go:217] Completed: docker run --rm --name no-preload-300878-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-300878 --entrypoint /usr/bin/test -v no-preload-300878:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (1.012153823s)
	I1027 19:59:02.777104  452092 oci.go:107] Successfully prepared a docker volume no-preload-300878
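	The sidecar run that just completed (--entrypoint /usr/bin/test ... -d /var/lib) is a volume-seeding trick: mounting the fresh named volume at /var makes Docker copy the image's /var contents into it, and test -d /var/lib simply exits 0 once that copy exists. A minimal stand-alone sketch of the same idea, with demo-var as a hypothetical volume name and the kicbase image tag taken from the log:

	    docker volume create demo-var
	    # an empty named volume is populated from the image's /var on first mount;
	    # the entrypoint only checks that the copied /var/lib directory is there
	    docker run --rm --entrypoint /usr/bin/test \
	      -v demo-var:/var \
	      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773 \
	      -d /var/lib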
	I1027 19:59:02.777119  452092 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1027 19:59:02.777326  452092 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1027 19:59:02.777470  452092 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 19:59:02.972550  452092 cache.go:157] /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1027 19:59:02.972945  452092 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.483412222s
	I1027 19:59:02.972995  452092 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1027 19:59:03.003583  452092 cache.go:157] /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1027 19:59:03.003863  452092 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.515063841s
	I1027 19:59:03.003915  452092 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1027 19:59:03.024136  452092 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-300878 --name no-preload-300878 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-300878 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-300878 --network no-preload-300878 --ip 192.168.85.2 --volume no-preload-300878:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 19:59:03.055728  452092 cache.go:157] /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1027 19:59:03.055809  452092 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.567184429s
	I1027 19:59:03.055861  452092 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1027 19:59:03.123305  452092 cache.go:157] /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1027 19:59:03.123379  452092 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.635015531s
	I1027 19:59:03.123410  452092 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1027 19:59:03.778023  452092 cli_runner.go:164] Run: docker container inspect no-preload-300878 --format={{.State.Running}}
	I1027 19:59:03.801883  452092 cli_runner.go:164] Run: docker container inspect no-preload-300878 --format={{.State.Status}}
	I1027 19:59:03.844781  452092 cli_runner.go:164] Run: docker exec no-preload-300878 stat /var/lib/dpkg/alternatives/iptables
	I1027 19:59:03.927698  452092 oci.go:144] the created container "no-preload-300878" has a running status.
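	Docker picked the host ports for the --publish=127.0.0.1::22 style mappings in the run command above; they can be read back with docker port, which is a shorter form of the docker container inspect -f templates used later in this log:

	    docker port no-preload-300878 22     # SSH, e.g. 127.0.0.1:33418
	    docker port no-preload-300878 8443   # Kubernetes API server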
	I1027 19:59:03.927774  452092 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21801-266035/.minikube/machines/no-preload-300878/id_rsa...
	I1027 19:59:04.133198  452092 cache.go:157] /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1027 19:59:04.133289  452092 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.643909246s
	I1027 19:59:04.133361  452092 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1027 19:59:04.133393  452092 cache.go:87] Successfully saved all images to host disk.
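	All seven image tarballs now sit in the per-architecture cache directory referenced in the lines above (coredns under its own subdirectory); a quick way to confirm what was saved:

	    ls -lh /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/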
	I1027 19:59:04.776875  452092 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21801-266035/.minikube/machines/no-preload-300878/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 19:59:04.807530  452092 cli_runner.go:164] Run: docker container inspect no-preload-300878 --format={{.State.Status}}
	I1027 19:59:04.832360  452092 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 19:59:04.832376  452092 kic_runner.go:114] Args: [docker exec --privileged no-preload-300878 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 19:59:04.881166  452092 cli_runner.go:164] Run: docker container inspect no-preload-300878 --format={{.State.Status}}
	I1027 19:59:04.911217  452092 machine.go:93] provisionDockerMachine start ...
	I1027 19:59:04.911323  452092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 19:59:04.935328  452092 main.go:141] libmachine: Using SSH client type: native
	I1027 19:59:04.935654  452092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1027 19:59:04.935665  452092 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 19:59:04.936318  452092 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60050->127.0.0.1:33418: read: connection reset by peer
	W1027 19:59:04.197347  448653 pod_ready.go:104] pod "coredns-5dd5756b68-fzdkv" is not "Ready", error: <nil>
	W1027 19:59:06.688604  448653 pod_ready.go:104] pod "coredns-5dd5756b68-fzdkv" is not "Ready", error: <nil>
	I1027 19:59:08.096273  452092 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-300878
	
	I1027 19:59:08.096349  452092 ubuntu.go:182] provisioning hostname "no-preload-300878"
	I1027 19:59:08.096471  452092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 19:59:08.118163  452092 main.go:141] libmachine: Using SSH client type: native
	I1027 19:59:08.118507  452092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1027 19:59:08.118519  452092 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-300878 && echo "no-preload-300878" | sudo tee /etc/hostname
	I1027 19:59:08.290031  452092 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-300878
	
	I1027 19:59:08.290122  452092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 19:59:08.311529  452092 main.go:141] libmachine: Using SSH client type: native
	I1027 19:59:08.311871  452092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1027 19:59:08.311892  452092 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-300878' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-300878/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-300878' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 19:59:08.463173  452092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 19:59:08.463202  452092 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-266035/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-266035/.minikube}
	I1027 19:59:08.463230  452092 ubuntu.go:190] setting up certificates
	I1027 19:59:08.463241  452092 provision.go:84] configureAuth start
	I1027 19:59:08.463306  452092 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-300878
	I1027 19:59:08.480197  452092 provision.go:143] copyHostCerts
	I1027 19:59:08.480259  452092 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem, removing ...
	I1027 19:59:08.480272  452092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem
	I1027 19:59:08.480350  452092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem (1078 bytes)
	I1027 19:59:08.480457  452092 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem, removing ...
	I1027 19:59:08.480468  452092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem
	I1027 19:59:08.480494  452092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem (1123 bytes)
	I1027 19:59:08.480550  452092 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem, removing ...
	I1027 19:59:08.480558  452092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem
	I1027 19:59:08.480582  452092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem (1675 bytes)
	I1027 19:59:08.480631  452092 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem org=jenkins.no-preload-300878 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-300878]
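	configureAuth generates this server certificate in Go; as an illustration only (not minikube's actual code path), the equivalent manual step with openssl, using the same organization and SAN list the log line above prints:

	    # key + CSR for the machine's server certificate
	    openssl req -new -newkey rsa:2048 -nodes \
	      -keyout server-key.pem -out server.csr \
	      -subj "/O=jenkins.no-preload-300878"
	    # sign with the minikube CA, attaching the SANs from the log line above
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
	      -CAcreateserial -out server.pem -days 365 \
	      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:no-preload-300878')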
	I1027 19:59:09.185786  452092 provision.go:177] copyRemoteCerts
	I1027 19:59:09.185856  452092 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 19:59:09.185901  452092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 19:59:09.203971  452092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/no-preload-300878/id_rsa Username:docker}
	I1027 19:59:09.316689  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 19:59:09.337523  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 19:59:09.357306  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 19:59:09.376600  452092 provision.go:87] duration metric: took 913.333326ms to configureAuth
	I1027 19:59:09.376629  452092 ubuntu.go:206] setting minikube options for container-runtime
	I1027 19:59:09.376816  452092 config.go:182] Loaded profile config "no-preload-300878": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:59:09.376922  452092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 19:59:09.394762  452092 main.go:141] libmachine: Using SSH client type: native
	I1027 19:59:09.395113  452092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1027 19:59:09.395133  452092 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 19:59:09.682560  452092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 19:59:09.682582  452092 machine.go:96] duration metric: took 4.771346115s to provisionDockerMachine
	I1027 19:59:09.682614  452092 client.go:171] duration metric: took 8.146156918s to LocalClient.Create
	I1027 19:59:09.682628  452092 start.go:167] duration metric: took 8.146214681s to libmachine.API.Create "no-preload-300878"
	I1027 19:59:09.682635  452092 start.go:293] postStartSetup for "no-preload-300878" (driver="docker")
	I1027 19:59:09.682645  452092 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 19:59:09.682721  452092 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 19:59:09.682784  452092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 19:59:09.703935  452092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/no-preload-300878/id_rsa Username:docker}
	I1027 19:59:09.811601  452092 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 19:59:09.815314  452092 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 19:59:09.815345  452092 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 19:59:09.815356  452092 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/addons for local assets ...
	I1027 19:59:09.815417  452092 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/files for local assets ...
	I1027 19:59:09.815508  452092 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem -> 2678802.pem in /etc/ssl/certs
	I1027 19:59:09.815612  452092 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 19:59:09.823208  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 19:59:09.841509  452092 start.go:296] duration metric: took 158.859008ms for postStartSetup
	I1027 19:59:09.841872  452092 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-300878
	I1027 19:59:09.861063  452092 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/config.json ...
	I1027 19:59:09.861361  452092 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:59:09.861409  452092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 19:59:09.879329  452092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/no-preload-300878/id_rsa Username:docker}
	I1027 19:59:09.985422  452092 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 19:59:09.990223  452092 start.go:128] duration metric: took 8.466763878s to createHost
	I1027 19:59:09.990245  452092 start.go:83] releasing machines lock for "no-preload-300878", held for 8.466882955s
	I1027 19:59:09.990314  452092 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-300878
	I1027 19:59:10.018343  452092 ssh_runner.go:195] Run: cat /version.json
	I1027 19:59:10.018411  452092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 19:59:10.018721  452092 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 19:59:10.018793  452092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 19:59:10.044551  452092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/no-preload-300878/id_rsa Username:docker}
	I1027 19:59:10.060740  452092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/no-preload-300878/id_rsa Username:docker}
	I1027 19:59:10.271681  452092 ssh_runner.go:195] Run: systemctl --version
	I1027 19:59:10.278103  452092 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 19:59:10.320520  452092 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 19:59:10.324807  452092 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 19:59:10.324874  452092 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 19:59:10.356558  452092 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1027 19:59:10.356578  452092 start.go:495] detecting cgroup driver to use...
	I1027 19:59:10.356609  452092 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 19:59:10.356661  452092 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 19:59:10.374758  452092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 19:59:10.387568  452092 docker.go:218] disabling cri-docker service (if available) ...
	I1027 19:59:10.387661  452092 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 19:59:10.404850  452092 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 19:59:10.426750  452092 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 19:59:10.550065  452092 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 19:59:10.682872  452092 docker.go:234] disabling docker service ...
	I1027 19:59:10.683021  452092 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 19:59:10.708417  452092 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 19:59:10.722337  452092 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 19:59:10.843257  452092 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 19:59:10.970779  452092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 19:59:10.983842  452092 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 19:59:10.997119  452092 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 19:59:10.997183  452092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:59:11.007836  452092 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 19:59:11.007986  452092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:59:11.017966  452092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:59:11.027642  452092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:59:11.037911  452092 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 19:59:11.046380  452092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:59:11.056640  452092 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:59:11.072042  452092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:59:11.081581  452092 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 19:59:11.091201  452092 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 19:59:11.099360  452092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:59:11.232857  452092 ssh_runner.go:195] Run: sudo systemctl restart crio
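	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following relevant settings; this is a reconstruction from the commands in the log (section headers assumed), not a dump of the real file:

	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"

	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]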
	I1027 19:59:11.361090  452092 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 19:59:11.361204  452092 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 19:59:11.365642  452092 start.go:563] Will wait 60s for crictl version
	I1027 19:59:11.365734  452092 ssh_runner.go:195] Run: which crictl
	I1027 19:59:11.369456  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 19:59:11.397277  452092 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 19:59:11.397399  452092 ssh_runner.go:195] Run: crio --version
	I1027 19:59:11.424857  452092 ssh_runner.go:195] Run: crio --version
	I1027 19:59:11.455658  452092 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1027 19:59:08.688647  448653 pod_ready.go:104] pod "coredns-5dd5756b68-fzdkv" is not "Ready", error: <nil>
	W1027 19:59:11.189311  448653 pod_ready.go:104] pod "coredns-5dd5756b68-fzdkv" is not "Ready", error: <nil>
	I1027 19:59:11.458593  452092 cli_runner.go:164] Run: docker network inspect no-preload-300878 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 19:59:11.475182  452092 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1027 19:59:11.479166  452092 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 19:59:11.489574  452092 kubeadm.go:883] updating cluster {Name:no-preload-300878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-300878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 19:59:11.489691  452092 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:59:11.489750  452092 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:59:11.515003  452092 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1027 19:59:11.515026  452092 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1027 19:59:11.515064  452092 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:59:11.515278  452092 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1027 19:59:11.515405  452092 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 19:59:11.515514  452092 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1027 19:59:11.515624  452092 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1027 19:59:11.515734  452092 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1027 19:59:11.515836  452092 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1027 19:59:11.515927  452092 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1027 19:59:11.517051  452092 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1027 19:59:11.517294  452092 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1027 19:59:11.517460  452092 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1027 19:59:11.517633  452092 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 19:59:11.517797  452092 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:59:11.518094  452092 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1027 19:59:11.518345  452092 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1027 19:59:11.518547  452092 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1027 19:59:11.746357  452092 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1027 19:59:11.750661  452092 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1027 19:59:11.752231  452092 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1027 19:59:11.752451  452092 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1027 19:59:11.759850  452092 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1027 19:59:11.761757  452092 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 19:59:11.763974  452092 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1027 19:59:11.842950  452092 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1027 19:59:11.843007  452092 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1027 19:59:11.843114  452092 ssh_runner.go:195] Run: which crictl
	I1027 19:59:11.861841  452092 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1027 19:59:11.861885  452092 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1027 19:59:11.861969  452092 ssh_runner.go:195] Run: which crictl
	I1027 19:59:11.892712  452092 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1027 19:59:11.892804  452092 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1027 19:59:11.892881  452092 ssh_runner.go:195] Run: which crictl
	I1027 19:59:11.892987  452092 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1027 19:59:11.893063  452092 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1027 19:59:11.893119  452092 ssh_runner.go:195] Run: which crictl
	I1027 19:59:11.934336  452092 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1027 19:59:11.934424  452092 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1027 19:59:11.934501  452092 ssh_runner.go:195] Run: which crictl
	I1027 19:59:11.934631  452092 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1027 19:59:11.934790  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1027 19:59:11.934797  452092 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 19:59:11.934898  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1027 19:59:11.934940  452092 ssh_runner.go:195] Run: which crictl
	I1027 19:59:11.934723  452092 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1027 19:59:11.935005  452092 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1027 19:59:11.935047  452092 ssh_runner.go:195] Run: which crictl
	I1027 19:59:11.935093  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1027 19:59:11.935068  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1027 19:59:12.027921  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1027 19:59:12.028020  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 19:59:12.028070  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1027 19:59:12.028122  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1027 19:59:12.028180  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1027 19:59:12.043904  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1027 19:59:12.043998  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1027 19:59:12.130737  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1027 19:59:12.130816  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1027 19:59:12.130899  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 19:59:12.130972  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1027 19:59:12.131057  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1027 19:59:12.172556  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1027 19:59:12.172738  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1027 19:59:12.267507  452092 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1027 19:59:12.267594  452092 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1027 19:59:12.267638  452092 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1027 19:59:12.267682  452092 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1027 19:59:12.267755  452092 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1027 19:59:12.267798  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 19:59:12.267855  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1027 19:59:12.267607  452092 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1027 19:59:12.267975  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1027 19:59:12.268020  452092 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1027 19:59:12.268095  452092 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1027 19:59:12.339664  452092 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1027 19:59:12.339708  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1027 19:59:12.339759  452092 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1027 19:59:12.339834  452092 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1027 19:59:12.339918  452092 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1027 19:59:12.339970  452092 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1027 19:59:12.340027  452092 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1027 19:59:12.340045  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1027 19:59:12.339783  452092 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1027 19:59:12.340068  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1027 19:59:12.339983  452092 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1027 19:59:12.339812  452092 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1027 19:59:12.340143  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1027 19:59:12.340151  452092 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1027 19:59:12.394369  452092 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1027 19:59:12.394456  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1027 19:59:12.394555  452092 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1027 19:59:12.394606  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1027 19:59:12.394685  452092 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1027 19:59:12.394721  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
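	Each stat/scp pair above is the same "copy only if missing" check; condensed into a hypothetical loop (the host alias "node" and the flat "cache/" layout are simplified for illustration):

	    # hypothetical condensed form of the existence-check-then-copy pattern
	    for f in pause_3.10.1 kube-scheduler_v1.34.1 kube-apiserver_v1.34.1 \
	             kube-proxy_v1.34.1 kube-controller-manager_v1.34.1 \
	             coredns_v1.12.1 etcd_3.6.4-0; do
	      ssh node stat "/var/lib/minikube/images/$f" >/dev/null 2>&1 \
	        || scp "cache/$f" "node:/var/lib/minikube/images/$f"
	    done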
	I1027 19:59:12.422348  452092 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1027 19:59:12.422468  452092 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	W1027 19:59:12.772849  452092 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1027 19:59:12.773090  452092 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:59:12.805491  452092 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1027 19:59:12.946045  452092 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1027 19:59:12.946151  452092 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1027 19:59:12.997829  452092 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1027 19:59:12.997897  452092 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:59:12.997971  452092 ssh_runner.go:195] Run: which crictl
	I1027 19:59:14.738971  452092 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.79279234s)
	I1027 19:59:14.739026  452092 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1027 19:59:14.739043  452092 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1027 19:59:14.739092  452092 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1027 19:59:14.739159  452092 ssh_runner.go:235] Completed: which crictl: (1.741173521s)
	I1027 19:59:14.739186  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:59:16.128648  452092 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.38943922s)
	I1027 19:59:16.128720  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:59:16.128792  452092 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.389685671s)
	I1027 19:59:16.128803  452092 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1027 19:59:16.128821  452092 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1027 19:59:16.128842  452092 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	W1027 19:59:13.689245  448653 pod_ready.go:104] pod "coredns-5dd5756b68-fzdkv" is not "Ready", error: <nil>
	I1027 19:59:16.188527  448653 pod_ready.go:94] pod "coredns-5dd5756b68-fzdkv" is "Ready"
	I1027 19:59:16.188549  448653 pod_ready.go:86] duration metric: took 39.006015866s for pod "coredns-5dd5756b68-fzdkv" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:59:16.192158  448653 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-942644" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:59:16.197651  448653 pod_ready.go:94] pod "etcd-old-k8s-version-942644" is "Ready"
	I1027 19:59:16.197674  448653 pod_ready.go:86] duration metric: took 5.493048ms for pod "etcd-old-k8s-version-942644" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:59:16.202449  448653 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-942644" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:59:16.208197  448653 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-942644" is "Ready"
	I1027 19:59:16.208221  448653 pod_ready.go:86] duration metric: took 5.75041ms for pod "kube-apiserver-old-k8s-version-942644" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:59:16.211605  448653 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-942644" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:59:16.386598  448653 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-942644" is "Ready"
	I1027 19:59:16.386674  448653 pod_ready.go:86] duration metric: took 174.996564ms for pod "kube-controller-manager-old-k8s-version-942644" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:59:16.586901  448653 pod_ready.go:83] waiting for pod "kube-proxy-nbdp5" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:59:16.985829  448653 pod_ready.go:94] pod "kube-proxy-nbdp5" is "Ready"
	I1027 19:59:16.985905  448653 pod_ready.go:86] duration metric: took 398.980669ms for pod "kube-proxy-nbdp5" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:59:17.186968  448653 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-942644" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:59:17.586354  448653 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-942644" is "Ready"
	I1027 19:59:17.586384  448653 pod_ready.go:86] duration metric: took 399.371247ms for pod "kube-scheduler-old-k8s-version-942644" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:59:17.586397  448653 pod_ready.go:40] duration metric: took 40.40843224s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 19:59:17.650879  448653 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1027 19:59:17.654505  448653 out.go:203] 
	W1027 19:59:17.657531  448653 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1027 19:59:17.660492  448653 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1027 19:59:17.663582  448653 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-942644" cluster and "default" namespace by default
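	The version-skew warning above is informational, and the workaround the log itself prints is the supported one: minikube can download and run a kubectl that matches the cluster's Kubernetes version instead of the host's 1.33.2 binary:

	    # run a cluster-matched kubectl, as suggested in the log
	    minikube kubectl -- get pods -A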
	I1027 19:59:18.049845  452092 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.920982266s)
	I1027 19:59:18.049871  452092 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1027 19:59:18.049889  452092 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1027 19:59:18.049937  452092 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1027 19:59:18.050005  452092 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.921273891s)
	I1027 19:59:18.050046  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:59:19.276461  452092 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.226390115s)
	I1027 19:59:19.276501  452092 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1027 19:59:19.276587  452092 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1027 19:59:19.276664  452092 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.226709827s)
	I1027 19:59:19.276698  452092 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1027 19:59:19.276724  452092 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1027 19:59:19.276784  452092 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1027 19:59:20.646426  452092 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.369613384s)
	I1027 19:59:20.646455  452092 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1027 19:59:20.646472  452092 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1027 19:59:20.646531  452092 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1027 19:59:20.646608  452092 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.370010525s)
	I1027 19:59:20.646629  452092 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1027 19:59:20.646645  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1027 19:59:25.103137  452092 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (4.456579429s)
	I1027 19:59:25.103216  452092 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1027 19:59:25.103256  452092 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1027 19:59:25.103333  452092 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1027 19:59:25.761149  452092 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1027 19:59:25.761188  452092 cache_images.go:124] Successfully loaded all cached images
	I1027 19:59:25.761195  452092 cache_images.go:93] duration metric: took 14.2461558s to LoadCachedImages
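	On the node, each load step above boils down to a podman load followed by CRI-O seeing the image in the shared containers/storage backend; the two commands the log runs can be replayed manually to verify (image path from the log, the grep filter is just for readability):

	    sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	    # confirm the container runtime's image store now contains it
	    sudo /usr/local/bin/crictl images | grep etcd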
	I1027 19:59:25.761210  452092 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1027 19:59:25.761307  452092 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-300878 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-300878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 19:59:25.761414  452092 ssh_runner.go:195] Run: crio config
	I1027 19:59:25.820025  452092 cni.go:84] Creating CNI manager for ""
	I1027 19:59:25.820046  452092 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:59:25.820066  452092 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 19:59:25.820091  452092 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-300878 NodeName:no-preload-300878 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 19:59:25.820214  452092 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-300878"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
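Editor's note: the generated file above is a single YAML stream holding four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), which minikube writes to /var/tmp/minikube/kubeadm.yaml.new below. A quick hedged sketch of sanity-checking such a multi-document file before kubeadm consumes it, assuming gopkg.in/yaml.v3 as the parser:

// Decode each YAML document in the kubeadm config and print its
// apiVersion/kind. The path is the one from this run; the check itself is
// an illustrative addition, not something minikube performs.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f) // yields one document per Decode call
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Println(doc.APIVersion, doc.Kind)
	}
}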
	
	I1027 19:59:25.820283  452092 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 19:59:25.828701  452092 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1027 19:59:25.828835  452092 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1027 19:59:25.836143  452092 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1027 19:59:25.836302  452092 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1027 19:59:25.836748  452092 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21801-266035/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1027 19:59:25.837297  452092 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21801-266035/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1027 19:59:25.839777  452092 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1027 19:59:25.839805  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1027 19:59:26.646029  452092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:59:26.679663  452092 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1027 19:59:26.685163  452092 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1027 19:59:26.685267  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1027 19:59:26.791106  452092 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1027 19:59:26.809790  452092 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1027 19:59:26.809827  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
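Editor's note: for each of kubectl, kubelet, and kubeadm the log shows the same pattern — `stat -c "%s %y"` against the target path, and only when that exits non-zero is the cached binary copied across. A local Go sketch of that existence-check-then-copy idiom, with the caveat that ensureBinary and the abbreviated cache path are illustrative and minikube performs both steps over SSH:

// Stat the target binary; only transfer the cached copy when it is missing.
package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

func ensureBinary(cached, target string) error {
	if _, err := os.Stat(target); err == nil {
		return nil // already present, skip the transfer
	}
	if err := os.MkdirAll(filepath.Dir(target), 0o755); err != nil {
		return err
	}
	src, err := os.Open(cached)
	if err != nil {
		return err
	}
	defer src.Close()
	dst, err := os.OpenFile(target, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer dst.Close()
	n, err := io.Copy(dst, src)
	fmt.Printf("copied %s -> %s (%d bytes)\n", cached, target, n)
	return err
}

func main() {
	// cache path shortened for illustration
	if err := ensureBinary("/home/jenkins/.minikube/cache/linux/arm64/v1.34.1/kubelet",
		"/var/lib/minikube/binaries/v1.34.1/kubelet"); err != nil {
		fmt.Println(err)
	}
}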
	I1027 19:59:27.394438  452092 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 19:59:27.402932  452092 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1027 19:59:27.418350  452092 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 19:59:27.433554  452092 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1027 19:59:27.447982  452092 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1027 19:59:27.452570  452092 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
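Editor's note: the bash one-liner above is an idempotent hosts update — strip any existing control-plane.minikube.internal entry, append the current mapping, and replace /etc/hosts through a temp file so the edit is all-or-nothing. The same logic as a local Go sketch (path and IP taken from this run; error handling abbreviated; the rename-based replace is a stand-in for the script's `cp` over /tmp/h.$$):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const path = "/etc/hosts"
	const entry = "192.168.85.2\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Drop any stale line for the control-plane name, like the grep -v above.
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"

	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(out), 0o644); err != nil {
		panic(err)
	}
	if err := os.Rename(tmp, path); err != nil { // atomic replace via temp file
		panic(err)
	}
	fmt.Println("hosts entry refreshed")
}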
	I1027 19:59:27.464039  452092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:59:27.588155  452092 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:59:27.604442  452092 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878 for IP: 192.168.85.2
	I1027 19:59:27.604460  452092 certs.go:195] generating shared ca certs ...
	I1027 19:59:27.604476  452092 certs.go:227] acquiring lock for ca certs: {Name:mk172548fc54811b51040cc0201d01eb4b3dd19b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:59:27.604624  452092 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key
	I1027 19:59:27.604664  452092 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key
	I1027 19:59:27.604671  452092 certs.go:257] generating profile certs ...
	I1027 19:59:27.604727  452092 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/client.key
	I1027 19:59:27.604738  452092 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/client.crt with IP's: []
	I1027 19:59:29.019152  452092 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/client.crt ...
	I1027 19:59:29.019181  452092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/client.crt: {Name:mkbb5ebfa77eba8c67f308cb4fbd6c17f1555ba1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:59:29.019385  452092 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/client.key ...
	I1027 19:59:29.019399  452092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/client.key: {Name:mk6f90b47f450aad83a391991d3734dd2474ddaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:59:29.019505  452092 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/apiserver.key.f5d283a0
	I1027 19:59:29.019522  452092 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/apiserver.crt.f5d283a0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1027 19:59:30.042864  452092 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/apiserver.crt.f5d283a0 ...
	I1027 19:59:30.042952  452092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/apiserver.crt.f5d283a0: {Name:mk2d521fac2a1bf164266b65da65cfe94463b8b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:59:30.043273  452092 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/apiserver.key.f5d283a0 ...
	I1027 19:59:30.043339  452092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/apiserver.key.f5d283a0: {Name:mk304e068571c5dde5ff33b96649ff357501c5c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:59:30.043502  452092 certs.go:382] copying /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/apiserver.crt.f5d283a0 -> /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/apiserver.crt
	I1027 19:59:30.043653  452092 certs.go:386] copying /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/apiserver.key.f5d283a0 -> /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/apiserver.key
	I1027 19:59:30.043909  452092 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/proxy-client.key
	I1027 19:59:30.043965  452092 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/proxy-client.crt with IP's: []
	I1027 19:59:31.024583  452092 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/proxy-client.crt ...
	I1027 19:59:31.024664  452092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/proxy-client.crt: {Name:mk9a33b4b39534b8dc19d62ac784812003559940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:59:31.024899  452092 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/proxy-client.key ...
	I1027 19:59:31.024942  452092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/proxy-client.key: {Name:mk993ba6b494e0fb2873af8eccf7c0ffdca1f760 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
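Editor's note: the certs.go/crypto.go lines above mint three profile certificates signed by the shared minikubeCA — a "minikube-user" client cert, an apiserver serving cert with the listed SAN IPs, and an "aggregator" proxy-client cert. A minimal crypto/x509 sketch of the client-cert case follows; the subject fields, validity, and the assumption that ca.key is a PKCS#1 RSA key are illustrative, and minikube's real implementation additionally handles serials, SANs, and file locking.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	caPEM, _ := os.ReadFile("/home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt")
	keyPEM, _ := os.ReadFile("/home/jenkins/minikube-integration/21801-266035/.minikube/ca.key")
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(keyPEM)
	if caBlock == nil || keyBlock == nil {
		panic("bad PEM input")
	}
	ca, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key
	if err != nil {
		panic(err)
	}

	clientKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	// Sign the new cert with the CA, as the "generating signed profile cert" lines do.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &clientKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}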
	I1027 19:59:31.025188  452092 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880.pem (1338 bytes)
	W1027 19:59:31.025259  452092 certs.go:480] ignoring /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880_empty.pem, impossibly tiny 0 bytes
	I1027 19:59:31.025286  452092 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 19:59:31.025349  452092 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem (1078 bytes)
	I1027 19:59:31.025406  452092 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem (1123 bytes)
	I1027 19:59:31.025452  452092 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem (1675 bytes)
	I1027 19:59:31.025536  452092 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 19:59:31.026125  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 19:59:31.044290  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 19:59:31.063208  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 19:59:31.085517  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 19:59:31.108677  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1027 19:59:31.141592  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 19:59:31.166046  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 19:59:31.187561  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 19:59:31.209137  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880.pem --> /usr/share/ca-certificates/267880.pem (1338 bytes)
	I1027 19:59:31.232183  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /usr/share/ca-certificates/2678802.pem (1708 bytes)
	I1027 19:59:31.254125  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 19:59:31.276202  452092 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 19:59:31.291186  452092 ssh_runner.go:195] Run: openssl version
	I1027 19:59:31.300874  452092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/267880.pem && ln -fs /usr/share/ca-certificates/267880.pem /etc/ssl/certs/267880.pem"
	I1027 19:59:31.317061  452092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/267880.pem
	I1027 19:59:31.321692  452092 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:04 /usr/share/ca-certificates/267880.pem
	I1027 19:59:31.321758  452092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/267880.pem
	I1027 19:59:31.367739  452092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/267880.pem /etc/ssl/certs/51391683.0"
	I1027 19:59:31.376174  452092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2678802.pem && ln -fs /usr/share/ca-certificates/2678802.pem /etc/ssl/certs/2678802.pem"
	I1027 19:59:31.384685  452092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2678802.pem
	I1027 19:59:31.388662  452092 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:04 /usr/share/ca-certificates/2678802.pem
	I1027 19:59:31.388750  452092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2678802.pem
	I1027 19:59:31.429817  452092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2678802.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 19:59:31.438481  452092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 19:59:31.447109  452092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:59:31.451417  452092 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:59:31.451505  452092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:59:31.492735  452092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
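Editor's note: the `openssl x509 -hash -noout` runs above compute the subject-name hash that OpenSSL uses to look up trust anchors in /etc/ssl/certs; each cert is then symlinked as <hash>.0 (51391683.0, 3ec20f2e.0, and b5213941.0 in this run). A sketch of those two steps; hashCertLink is an illustrative helper, not a minikube function, and it runs openssl locally rather than over SSH.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func hashCertLink(certPath string) error {
	// openssl prints the 8-hex-digit subject hash used for CA directory lookup.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // replace a stale link, mirroring `ln -fs`
	if err := os.Symlink(certPath, link); err != nil {
		return err
	}
	fmt.Println("linked", certPath, "->", link)
	return nil
}

func main() {
	if err := hashCertLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}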
	I1027 19:59:31.501840  452092 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 19:59:31.505851  452092 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 19:59:31.505905  452092 kubeadm.go:400] StartCluster: {Name:no-preload-300878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-300878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:59:31.505990  452092 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:59:31.506052  452092 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:59:31.536539  452092 cri.go:89] found id: ""
	I1027 19:59:31.536620  452092 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 19:59:31.545247  452092 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 19:59:31.553148  452092 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1027 19:59:31.553254  452092 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 19:59:31.561851  452092 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 19:59:31.561871  452092 kubeadm.go:157] found existing configuration files:
	
	I1027 19:59:31.561924  452092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 19:59:31.570320  452092 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 19:59:31.570380  452092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 19:59:31.578264  452092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 19:59:31.585781  452092 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 19:59:31.585845  452092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 19:59:31.593154  452092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 19:59:31.601048  452092 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 19:59:31.601165  452092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 19:59:31.608485  452092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 19:59:31.616099  452092 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 19:59:31.616211  452092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
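Editor's note: the four grep/rm pairs above implement stale-config cleanup — a kubeconfig is kept only if it already points at https://control-plane.minikube.internal:8443; anything else is removed so `kubeadm init` starts from a clean slate. On this fresh node every grep exits 2 because the files do not exist, so the rm calls are no-ops. A hedged Go sketch of the same loop, run locally for illustration:

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		if err == nil && bytes.Contains(data, endpoint) {
			continue // config already targets our endpoint, keep it
		}
		os.Remove(conf) // missing or stale: ensure kubeadm init starts clean
		fmt.Println("removed (or absent):", conf)
	}
}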
	I1027 19:59:31.623930  452092 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 19:59:31.662468  452092 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1027 19:59:31.662637  452092 kubeadm.go:318] [preflight] Running pre-flight checks
	I1027 19:59:31.688962  452092 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1027 19:59:31.689052  452092 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1027 19:59:31.689097  452092 kubeadm.go:318] OS: Linux
	I1027 19:59:31.689153  452092 kubeadm.go:318] CGROUPS_CPU: enabled
	I1027 19:59:31.689212  452092 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1027 19:59:31.689269  452092 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1027 19:59:31.689327  452092 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1027 19:59:31.689385  452092 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1027 19:59:31.689443  452092 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1027 19:59:31.689497  452092 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1027 19:59:31.689555  452092 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1027 19:59:31.689611  452092 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1027 19:59:31.790015  452092 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 19:59:31.790138  452092 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 19:59:31.790238  452092 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 19:59:31.812792  452092 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	
	==> CRI-O <==
	Oct 27 19:59:11 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:11.667367405Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:59:11 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:11.675715986Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:59:11 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:11.676261324Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:59:11 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:11.692786963Z" level=info msg="Created container e1a6d4b6855d2b349dde7afdcd071acf4cd94ca737e469cfa84893b5b71d4655: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h8ggd/dashboard-metrics-scraper" id=d9de6d59-fe0a-4efb-8c4c-997044c4ad9c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:59:11 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:11.693841751Z" level=info msg="Starting container: e1a6d4b6855d2b349dde7afdcd071acf4cd94ca737e469cfa84893b5b71d4655" id=bdf409c2-91ae-418a-a2cc-7cd851470504 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:59:11 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:11.695865637Z" level=info msg="Started container" PID=1633 containerID=e1a6d4b6855d2b349dde7afdcd071acf4cd94ca737e469cfa84893b5b71d4655 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h8ggd/dashboard-metrics-scraper id=bdf409c2-91ae-418a-a2cc-7cd851470504 name=/runtime.v1.RuntimeService/StartContainer sandboxID=aa2ffa49e41e8f56c01d3c9268bdb24cdc849cae94cc26956a9ddd4817734327
	Oct 27 19:59:11 old-k8s-version-942644 conmon[1631]: conmon e1a6d4b6855d2b349dde <ninfo>: container 1633 exited with status 1
	Oct 27 19:59:11 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:11.931384857Z" level=info msg="Removing container: 1a065c2e5d2d1c32440c57f3300210cd63be6bf11ab6fd06a70652ff3395da3f" id=6642bda7-840d-424f-a54a-197522887c59 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 19:59:11 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:11.942836603Z" level=info msg="Error loading conmon cgroup of container 1a065c2e5d2d1c32440c57f3300210cd63be6bf11ab6fd06a70652ff3395da3f: cgroup deleted" id=6642bda7-840d-424f-a54a-197522887c59 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 19:59:11 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:11.949389017Z" level=info msg="Removed container 1a065c2e5d2d1c32440c57f3300210cd63be6bf11ab6fd06a70652ff3395da3f: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h8ggd/dashboard-metrics-scraper" id=6642bda7-840d-424f-a54a-197522887c59 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 19:59:16 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:16.816464825Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 19:59:16 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:16.820881874Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 19:59:16 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:16.820914562Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 19:59:16 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:16.820935197Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 19:59:16 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:16.823996633Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 19:59:16 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:16.824033727Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 19:59:16 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:16.824053107Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 19:59:16 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:16.826836569Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 19:59:16 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:16.826871407Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 19:59:16 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:16.826890082Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 19:59:16 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:16.829562983Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 19:59:16 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:16.829597156Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 19:59:16 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:16.829616306Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 19:59:16 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:16.832221911Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 19:59:16 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:16.832253688Z" level=info msg="Updated default CNI network name to kindnet"
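Editor's note: the CREATE/WRITE/RENAME burst above is kindnet writing 10-kindnet.conflist.temp and renaming it into place so the conflist appears atomically; CRI-O watches /etc/cni/net.d with inotify and re-parses the default network after each event. A sketch of the watching side, using github.com/fsnotify/fsnotify as an assumed stand-in for CRI-O's internal watcher:

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			// The CREATE of the final name after a RENAME signals that a
			// complete conflist is in place and safe to re-parse.
			log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
		case err := <-w.Errors:
			log.Println("watch error:", err)
		}
	}
}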
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	e1a6d4b6855d2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago       Exited              dashboard-metrics-scraper   2                   aa2ffa49e41e8       dashboard-metrics-scraper-5f989dc9cf-h8ggd       kubernetes-dashboard
	955dc230362e0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           26 seconds ago       Running             storage-provisioner         2                   9be867f4f292d       storage-provisioner                              kube-system
	41c348ac8acf8       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   28 seconds ago       Running             kubernetes-dashboard        0                   6c84e15c6561d       kubernetes-dashboard-8694d4445c-5rbpv            kubernetes-dashboard
	489f480edd095       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           56 seconds ago       Running             coredns                     1                   d37801f0cbd39       coredns-5dd5756b68-fzdkv                         kube-system
	4d661e9051c59       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           56 seconds ago       Running             kindnet-cni                 1                   8cdcf0ecc0ff4       kindnet-845vr                                    kube-system
	499383b8d8fc1       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           57 seconds ago       Exited              storage-provisioner         1                   9be867f4f292d       storage-provisioner                              kube-system
	7a2d5a71f412b       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           57 seconds ago       Running             kube-proxy                  1                   2002efcc324e4       kube-proxy-nbdp5                                 kube-system
	df25b11932b65       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           57 seconds ago       Running             busybox                     1                   3eb11d6747fc5       busybox                                          default
	4191fbc773c78       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   0189770b5a579       etcd-old-k8s-version-942644                      kube-system
	8da113f89d96b       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   53a4859c0edac       kube-controller-manager-old-k8s-version-942644   kube-system
	8119904b23c36       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   bf54005ea42bb       kube-apiserver-old-k8s-version-942644            kube-system
	5f6e29c2f0799       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   d34e5f736a425       kube-scheduler-old-k8s-version-942644            kube-system
	
	
	==> coredns [489f480edd095f2ec8dafa5787de84eb1a9ed7d0820e496497cd82557cb54df5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:45562 - 13555 "HINFO IN 5364419795793254679.3340023868961547350. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017404448s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-942644
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-942644
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=old-k8s-version-942644
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T19_57_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 19:57:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-942644
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:59:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 19:59:05 +0000   Mon, 27 Oct 2025 19:57:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 19:59:05 +0000   Mon, 27 Oct 2025 19:57:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 19:59:05 +0000   Mon, 27 Oct 2025 19:57:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 19:59:05 +0000   Mon, 27 Oct 2025 19:57:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-942644
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                effa8846-d81b-42a0-8993-bf5b12f2eae0
	  Boot ID:                    23ea05b4-8203-4fb9-a84a-5deb4b091cbb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-5dd5756b68-fzdkv                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     112s
	  kube-system                 etcd-old-k8s-version-942644                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m8s
	  kube-system                 kindnet-845vr                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-old-k8s-version-942644             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-old-k8s-version-942644    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-proxy-nbdp5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-old-k8s-version-942644             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-h8ggd        0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-5rbpv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 109s                   kube-proxy       
	  Normal  Starting                 56s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m13s (x8 over 2m13s)  kubelet          Node old-k8s-version-942644 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m13s (x8 over 2m13s)  kubelet          Node old-k8s-version-942644 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m13s (x8 over 2m13s)  kubelet          Node old-k8s-version-942644 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m5s                   kubelet          Node old-k8s-version-942644 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m5s                   kubelet          Node old-k8s-version-942644 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s                   kubelet          Node old-k8s-version-942644 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m5s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s                   node-controller  Node old-k8s-version-942644 event: Registered Node old-k8s-version-942644 in Controller
	  Normal  NodeReady                98s                    kubelet          Node old-k8s-version-942644 status is now: NodeReady
	  Normal  Starting                 64s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  64s (x8 over 64s)      kubelet          Node old-k8s-version-942644 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    64s (x8 over 64s)      kubelet          Node old-k8s-version-942644 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     64s (x8 over 64s)      kubelet          Node old-k8s-version-942644 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                    node-controller  Node old-k8s-version-942644 event: Registered Node old-k8s-version-942644 in Controller
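Editor's note: the describe-nodes dump above (Conditions, Capacity, Events) is rendered from node.Status; the same condition data can be read directly with client-go. The kubeconfig path below is illustrative, the node name is this run's:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "old-k8s-version-942644", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Prints the rows behind the Conditions table: MemoryPressure, DiskPressure, PIDPressure, Ready.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
}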
	
	
	==> dmesg <==
	[Oct27 19:34] overlayfs: idmapped layers are currently not supported
	[ +33.986700] overlayfs: idmapped layers are currently not supported
	[Oct27 19:36] overlayfs: idmapped layers are currently not supported
	[Oct27 19:37] overlayfs: idmapped layers are currently not supported
	[Oct27 19:38] overlayfs: idmapped layers are currently not supported
	[Oct27 19:39] overlayfs: idmapped layers are currently not supported
	[Oct27 19:40] overlayfs: idmapped layers are currently not supported
	[  +7.015891] overlayfs: idmapped layers are currently not supported
	[Oct27 19:41] overlayfs: idmapped layers are currently not supported
	[ +28.455216] overlayfs: idmapped layers are currently not supported
	[Oct27 19:42] overlayfs: idmapped layers are currently not supported
	[ +26.490174] overlayfs: idmapped layers are currently not supported
	[Oct27 19:44] overlayfs: idmapped layers are currently not supported
	[Oct27 19:45] overlayfs: idmapped layers are currently not supported
	[Oct27 19:47] overlayfs: idmapped layers are currently not supported
	[Oct27 19:49] overlayfs: idmapped layers are currently not supported
	[ +31.410335] overlayfs: idmapped layers are currently not supported
	[Oct27 19:51] overlayfs: idmapped layers are currently not supported
	[Oct27 19:53] overlayfs: idmapped layers are currently not supported
	[Oct27 19:54] overlayfs: idmapped layers are currently not supported
	[Oct27 19:55] overlayfs: idmapped layers are currently not supported
	[Oct27 19:56] overlayfs: idmapped layers are currently not supported
	[Oct27 19:57] overlayfs: idmapped layers are currently not supported
	[Oct27 19:58] overlayfs: idmapped layers are currently not supported
	[Oct27 19:59] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4191fbc773c7860df74d9c43a79eaa2b2c2fedf87a834522d6484976aa6a7b38] <==
	{"level":"info","ts":"2025-10-27T19:58:30.632425Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-27T19:58:30.632559Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-27T19:58:30.632988Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-27T19:58:30.651152Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-27T19:58:30.651363Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T19:58:30.651425Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T19:58:30.654758Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-27T19:58:30.662867Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-27T19:58:30.662969Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-27T19:58:30.658694Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-27T19:58:30.675699Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-27T19:58:31.766802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-27T19:58:31.766911Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-27T19:58:31.76697Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-27T19:58:31.767022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-27T19:58:31.767053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-27T19:58:31.767089Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-27T19:58:31.767131Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-27T19:58:31.776392Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-942644 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-27T19:58:31.776626Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-27T19:58:31.776747Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-27T19:58:31.777822Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-27T19:58:31.783085Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-27T19:58:31.810235Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-27T19:58:31.810345Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:59:33 up  2:42,  0 user,  load average: 3.58, 3.02, 2.57
	Linux old-k8s-version-942644 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4d661e9051c59fc87cc15877e6a8f433cac6f4f3e16430714cea6c010b259343] <==
	I1027 19:58:36.541333       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 19:58:36.542018       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1027 19:58:36.542183       1 main.go:148] setting mtu 1500 for CNI 
	I1027 19:58:36.542197       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 19:58:36.542207       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T19:58:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 19:58:36.817055       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 19:58:36.817071       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 19:58:36.817080       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 19:58:36.817179       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 19:59:06.816945       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1027 19:59:06.816947       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1027 19:59:06.817094       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1027 19:59:06.817170       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1027 19:59:08.318164       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 19:59:08.318197       1 metrics.go:72] Registering metrics
	I1027 19:59:08.318250       1 controller.go:711] "Syncing nftables rules"
	I1027 19:59:16.816122       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 19:59:16.816238       1 main.go:301] handling current node
	I1027 19:59:26.824409       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 19:59:26.824443       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8119904b23c367a5d244f25c4fe2bc1cd3d35a55a65310c3653fba1207a28c6c] <==
	I1027 19:58:34.908985       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 19:58:34.908992       1 cache.go:39] Caches are synced for autoregister controller
	I1027 19:58:34.909128       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1027 19:58:34.909697       1 shared_informer.go:318] Caches are synced for configmaps
	E1027 19:58:34.910423       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	I1027 19:58:34.911250       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 19:58:34.917832       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E1027 19:58:34.945902       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1027 19:58:34.948015       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1027 19:58:35.513864       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 19:58:36.837551       1 controller.go:624] quota admission added evaluator for: namespaces
	I1027 19:58:36.912437       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1027 19:58:36.945154       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 19:58:36.966358       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 19:58:36.979926       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1027 19:58:37.091426       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.20.113"}
	I1027 19:58:37.113018       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.48.208"}
	E1027 19:58:44.908667       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","leader-election","node-high","system","workload-high","workload-low","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	I1027 19:58:47.360274       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 19:58:47.384494       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1027 19:58:47.413689       1 controller.go:624] quota admission added evaluator for: endpoints
	E1027 19:58:54.909510       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","leader-election","node-high","system","workload-high","workload-low","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	E1027 19:59:04.909774       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","catch-all","exempt","global-default","leader-election","node-high","system","workload-high"] items=[{},{},{},{},{},{},{},{}]
	E1027 19:59:14.910731       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E1027 19:59:24.911660       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	
	
	==> kube-controller-manager [8da113f89d96b45a2ba55effd9ad48b2c52db76a2494916df6764912cbea8fcf] <==
	I1027 19:58:47.431563       1 shared_informer.go:318] Caches are synced for disruption
	I1027 19:58:47.463606       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1027 19:58:47.489729       1 shared_informer.go:318] Caches are synced for resource quota
	I1027 19:58:47.493762       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-h8ggd"
	I1027 19:58:47.499590       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-5rbpv"
	I1027 19:58:47.525728       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="109.395301ms"
	I1027 19:58:47.559404       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="142.73456ms"
	I1027 19:58:47.564006       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="37.952232ms"
	I1027 19:58:47.567179       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.423µs"
	I1027 19:58:47.603333       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="43.81102ms"
	I1027 19:58:47.603510       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="62.177µs"
	I1027 19:58:47.603888       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="66.476µs"
	I1027 19:58:47.636492       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="52.807µs"
	I1027 19:58:47.791200       1 shared_informer.go:318] Caches are synced for garbage collector
	I1027 19:58:47.791295       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1027 19:58:47.793344       1 shared_informer.go:318] Caches are synced for garbage collector
	I1027 19:58:56.867533       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="79.481µs"
	I1027 19:58:57.893413       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="46.472µs"
	I1027 19:58:58.953954       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="108.182µs"
	I1027 19:59:04.955579       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="18.026346ms"
	I1027 19:59:04.956367       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="51.855µs"
	I1027 19:59:11.968677       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.228µs"
	I1027 19:59:16.029176       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.502552ms"
	I1027 19:59:16.029680       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="75.69µs"
	I1027 19:59:18.759559       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="46.317µs"
	
	
	==> kube-proxy [7a2d5a71f412b243de7e7e81e23b51bc4017375d2bae9648942dc2819590c31d] <==
	I1027 19:58:36.606152       1 server_others.go:69] "Using iptables proxy"
	I1027 19:58:36.621152       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1027 19:58:36.736563       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 19:58:36.749783       1 server_others.go:152] "Using iptables Proxier"
	I1027 19:58:36.749821       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1027 19:58:36.749828       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1027 19:58:36.749863       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1027 19:58:36.750121       1 server.go:846] "Version info" version="v1.28.0"
	I1027 19:58:36.750132       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:58:36.764250       1 config.go:188] "Starting service config controller"
	I1027 19:58:36.764266       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1027 19:58:36.764283       1 config.go:97] "Starting endpoint slice config controller"
	I1027 19:58:36.764287       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1027 19:58:36.767209       1 config.go:315] "Starting node config controller"
	I1027 19:58:36.767230       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1027 19:58:36.864409       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1027 19:58:36.864955       1 shared_informer.go:318] Caches are synced for service config
	I1027 19:58:36.868070       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [5f6e29c2f0799fd26d70aeb640fbc8515dabd349f18d85e616d9b44fa0a76304] <==
	I1027 19:58:32.352764       1 serving.go:348] Generated self-signed cert in-memory
	W1027 19:58:34.627095       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 19:58:34.627126       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 19:58:34.627138       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 19:58:34.627157       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 19:58:34.862143       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1027 19:58:34.865764       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:58:34.867872       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:58:34.867952       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1027 19:58:34.868530       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1027 19:58:34.869721       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1027 19:58:34.968767       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 27 19:58:47 old-k8s-version-942644 kubelet[778]: I1027 19:58:47.659967     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0bd4a580-5e95-40f0-bbcc-10838ef4c773-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-5rbpv\" (UID: \"0bd4a580-5e95-40f0-bbcc-10838ef4c773\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5rbpv"
	Oct 27 19:58:47 old-k8s-version-942644 kubelet[778]: I1027 19:58:47.660003     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/744e3140-27f8-4808-9cd5-97dae649dc0c-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-h8ggd\" (UID: \"744e3140-27f8-4808-9cd5-97dae649dc0c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h8ggd"
	Oct 27 19:58:47 old-k8s-version-942644 kubelet[778]: I1027 19:58:47.660028     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h7kw\" (UniqueName: \"kubernetes.io/projected/744e3140-27f8-4808-9cd5-97dae649dc0c-kube-api-access-7h7kw\") pod \"dashboard-metrics-scraper-5f989dc9cf-h8ggd\" (UID: \"744e3140-27f8-4808-9cd5-97dae649dc0c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h8ggd"
	Oct 27 19:58:48 old-k8s-version-942644 kubelet[778]: W1027 19:58:48.812255     778 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/10950a3c65bf121769a3f4633bb765a9cdb0464d8195fe6bebdec626b5deca5a/crio-aa2ffa49e41e8f56c01d3c9268bdb24cdc849cae94cc26956a9ddd4817734327 WatchSource:0}: Error finding container aa2ffa49e41e8f56c01d3c9268bdb24cdc849cae94cc26956a9ddd4817734327: Status 404 returned error can't find the container with id aa2ffa49e41e8f56c01d3c9268bdb24cdc849cae94cc26956a9ddd4817734327
	Oct 27 19:58:48 old-k8s-version-942644 kubelet[778]: W1027 19:58:48.864476     778 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/10950a3c65bf121769a3f4633bb765a9cdb0464d8195fe6bebdec626b5deca5a/crio-6c84e15c6561d109ad9889eb3ac9a70a013e767eb768a02e28c9508c52729dc5 WatchSource:0}: Error finding container 6c84e15c6561d109ad9889eb3ac9a70a013e767eb768a02e28c9508c52729dc5: Status 404 returned error can't find the container with id 6c84e15c6561d109ad9889eb3ac9a70a013e767eb768a02e28c9508c52729dc5
	Oct 27 19:58:56 old-k8s-version-942644 kubelet[778]: I1027 19:58:56.849965     778 scope.go:117] "RemoveContainer" containerID="c478c85c9c8138771143b132a48fcd6eb4512f8e116cfdee29cedbb4fe0b6045"
	Oct 27 19:58:57 old-k8s-version-942644 kubelet[778]: I1027 19:58:57.851987     778 scope.go:117] "RemoveContainer" containerID="c478c85c9c8138771143b132a48fcd6eb4512f8e116cfdee29cedbb4fe0b6045"
	Oct 27 19:58:57 old-k8s-version-942644 kubelet[778]: I1027 19:58:57.852283     778 scope.go:117] "RemoveContainer" containerID="1a065c2e5d2d1c32440c57f3300210cd63be6bf11ab6fd06a70652ff3395da3f"
	Oct 27 19:58:57 old-k8s-version-942644 kubelet[778]: E1027 19:58:57.855343     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-h8ggd_kubernetes-dashboard(744e3140-27f8-4808-9cd5-97dae649dc0c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h8ggd" podUID="744e3140-27f8-4808-9cd5-97dae649dc0c"
	Oct 27 19:58:58 old-k8s-version-942644 kubelet[778]: I1027 19:58:58.856091     778 scope.go:117] "RemoveContainer" containerID="1a065c2e5d2d1c32440c57f3300210cd63be6bf11ab6fd06a70652ff3395da3f"
	Oct 27 19:58:58 old-k8s-version-942644 kubelet[778]: E1027 19:58:58.856375     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-h8ggd_kubernetes-dashboard(744e3140-27f8-4808-9cd5-97dae649dc0c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h8ggd" podUID="744e3140-27f8-4808-9cd5-97dae649dc0c"
	Oct 27 19:58:59 old-k8s-version-942644 kubelet[778]: I1027 19:58:59.858212     778 scope.go:117] "RemoveContainer" containerID="1a065c2e5d2d1c32440c57f3300210cd63be6bf11ab6fd06a70652ff3395da3f"
	Oct 27 19:58:59 old-k8s-version-942644 kubelet[778]: E1027 19:58:59.858499     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-h8ggd_kubernetes-dashboard(744e3140-27f8-4808-9cd5-97dae649dc0c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h8ggd" podUID="744e3140-27f8-4808-9cd5-97dae649dc0c"
	Oct 27 19:59:06 old-k8s-version-942644 kubelet[778]: I1027 19:59:06.910620     778 scope.go:117] "RemoveContainer" containerID="499383b8d8fc1d11df4a7905d477e6e6830cc86dd0bc67d3eb588824cec2dc07"
	Oct 27 19:59:06 old-k8s-version-942644 kubelet[778]: I1027 19:59:06.932881     778 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5rbpv" podStartSLOduration=4.234066717 podCreationTimestamp="2025-10-27 19:58:47 +0000 UTC" firstStartedPulling="2025-10-27 19:58:48.892915174 +0000 UTC m=+19.440596029" lastFinishedPulling="2025-10-27 19:59:04.591670016 +0000 UTC m=+35.139350880" observedRunningTime="2025-10-27 19:59:04.936695541 +0000 UTC m=+35.484376397" watchObservedRunningTime="2025-10-27 19:59:06.932821568 +0000 UTC m=+37.480502432"
	Oct 27 19:59:11 old-k8s-version-942644 kubelet[778]: I1027 19:59:11.663172     778 scope.go:117] "RemoveContainer" containerID="1a065c2e5d2d1c32440c57f3300210cd63be6bf11ab6fd06a70652ff3395da3f"
	Oct 27 19:59:11 old-k8s-version-942644 kubelet[778]: I1027 19:59:11.929106     778 scope.go:117] "RemoveContainer" containerID="1a065c2e5d2d1c32440c57f3300210cd63be6bf11ab6fd06a70652ff3395da3f"
	Oct 27 19:59:11 old-k8s-version-942644 kubelet[778]: I1027 19:59:11.929370     778 scope.go:117] "RemoveContainer" containerID="e1a6d4b6855d2b349dde7afdcd071acf4cd94ca737e469cfa84893b5b71d4655"
	Oct 27 19:59:11 old-k8s-version-942644 kubelet[778]: E1027 19:59:11.929644     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-h8ggd_kubernetes-dashboard(744e3140-27f8-4808-9cd5-97dae649dc0c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h8ggd" podUID="744e3140-27f8-4808-9cd5-97dae649dc0c"
	Oct 27 19:59:18 old-k8s-version-942644 kubelet[778]: I1027 19:59:18.732863     778 scope.go:117] "RemoveContainer" containerID="e1a6d4b6855d2b349dde7afdcd071acf4cd94ca737e469cfa84893b5b71d4655"
	Oct 27 19:59:18 old-k8s-version-942644 kubelet[778]: E1027 19:59:18.733623     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-h8ggd_kubernetes-dashboard(744e3140-27f8-4808-9cd5-97dae649dc0c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h8ggd" podUID="744e3140-27f8-4808-9cd5-97dae649dc0c"
	Oct 27 19:59:30 old-k8s-version-942644 kubelet[778]: I1027 19:59:30.371646     778 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 27 19:59:30 old-k8s-version-942644 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 19:59:30 old-k8s-version-942644 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 19:59:30 old-k8s-version-942644 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [41c348ac8acf87523b5ca5a1bc063fd5887c49974ca78f9ddfd69cc2af77e23d] <==
	2025/10/27 19:59:04 Using namespace: kubernetes-dashboard
	2025/10/27 19:59:04 Using in-cluster config to connect to apiserver
	2025/10/27 19:59:04 Using secret token for csrf signing
	2025/10/27 19:59:04 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 19:59:04 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 19:59:04 Successful initial request to the apiserver, version: v1.28.0
	2025/10/27 19:59:04 Generating JWE encryption key
	2025/10/27 19:59:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 19:59:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 19:59:05 Initializing JWE encryption key from synchronized object
	2025/10/27 19:59:05 Creating in-cluster Sidecar client
	2025/10/27 19:59:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 19:59:05 Serving insecurely on HTTP port: 9090
	2025/10/27 19:59:04 Starting overwatch
	
	
	==> storage-provisioner [499383b8d8fc1d11df4a7905d477e6e6830cc86dd0bc67d3eb588824cec2dc07] <==
	I1027 19:58:36.564059       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 19:59:06.566361       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [955dc230362e096be0e14119979eeb4b516307eceab1bee2309c5c10aee85887] <==
	I1027 19:59:06.953698       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 19:59:06.967649       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 19:59:06.967825       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1027 19:59:24.371921       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 19:59:24.372112       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-942644_147149da-62b0-4ee4-81eb-fb51b192b049!
	I1027 19:59:24.374078       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ba540a69-e48f-48b4-a3e1-f6e693f646a8", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-942644_147149da-62b0-4ee4-81eb-fb51b192b049 became leader
	I1027 19:59:24.473639       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-942644_147149da-62b0-4ee4-81eb-fb51b192b049!
	

                                                
                                                
-- /stdout --
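The repeated "dial tcp 10.96.0.1:443: i/o timeout" entries in the kindnet and storage-provisioner sections above show that the in-cluster apiserver ClusterIP was unreachable between roughly 19:58:36 and 19:59:06 while the control plane came back up. A minimal reachability probe from inside the node would look like the following sketch (it assumes curl is present in the kicbase node image; the profile name and endpoint are taken from the logs above, not from the test suite):

	out/minikube-linux-arm64 -p old-k8s-version-942644 ssh -- curl -sk https://10.96.0.1:443/version

A reachable apiserver answers with a small JSON body (a version object, or a 401/403 error); a roughly 30-second hang ending in a timeout reproduces the failure the in-cluster components logged.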
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-942644 -n old-k8s-version-942644
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-942644 -n old-k8s-version-942644: exit status 2 (553.071481ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
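Rather than querying {{.APIServer}} and {{.Host}} in separate invocations, a single Go template over minikube's status fields snapshots all components in one pass (a sketch using the documented Host, Kubelet and APIServer template fields):

	out/minikube-linux-arm64 status -p old-k8s-version-942644 --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'

Since the helper treats exit status 2 as possibly benign ("may be ok"), capturing all fields at once makes it immediately visible which component, if any, minikube considers unhealthy.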
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-942644 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
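The field selector above excludes Running pods; a variant that also shows where any remaining pods were scheduled (same context, standard kubectl flags) is:

	kubectl --context old-k8s-version-942644 get pods -A --field-selector=status.phase!=Running -o wide

Against a paused control plane the query itself may time out, which is in itself a useful post-mortem signal.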
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-942644
helpers_test.go:243: (dbg) docker inspect old-k8s-version-942644:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "10950a3c65bf121769a3f4633bb765a9cdb0464d8195fe6bebdec626b5deca5a",
	        "Created": "2025-10-27T19:57:02.220286943Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 448778,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T19:58:22.815853824Z",
	            "FinishedAt": "2025-10-27T19:58:21.990485962Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/10950a3c65bf121769a3f4633bb765a9cdb0464d8195fe6bebdec626b5deca5a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/10950a3c65bf121769a3f4633bb765a9cdb0464d8195fe6bebdec626b5deca5a/hostname",
	        "HostsPath": "/var/lib/docker/containers/10950a3c65bf121769a3f4633bb765a9cdb0464d8195fe6bebdec626b5deca5a/hosts",
	        "LogPath": "/var/lib/docker/containers/10950a3c65bf121769a3f4633bb765a9cdb0464d8195fe6bebdec626b5deca5a/10950a3c65bf121769a3f4633bb765a9cdb0464d8195fe6bebdec626b5deca5a-json.log",
	        "Name": "/old-k8s-version-942644",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-942644:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-942644",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "10950a3c65bf121769a3f4633bb765a9cdb0464d8195fe6bebdec626b5deca5a",
	                "LowerDir": "/var/lib/docker/overlay2/ac29eb6ae0b9a6af31ee0c5db23452fde2f2593679f1a0a32d72317c10177f51-init/diff:/var/lib/docker/overlay2/0567218bbe05e8b59b2aef7ad82032d6716040c1f9d9d73e7de8682101079f76/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ac29eb6ae0b9a6af31ee0c5db23452fde2f2593679f1a0a32d72317c10177f51/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ac29eb6ae0b9a6af31ee0c5db23452fde2f2593679f1a0a32d72317c10177f51/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ac29eb6ae0b9a6af31ee0c5db23452fde2f2593679f1a0a32d72317c10177f51/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-942644",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-942644/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-942644",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-942644",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-942644",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f6088abc4e01f24e2b6ae491e6149a5f7e5e06a8e864997679892367c0ffea3c",
	            "SandboxKey": "/var/run/docker/netns/f6088abc4e01",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33413"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33414"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33417"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33415"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33416"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-942644": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "66:a4:45:d5:16:28",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7843814e6694ec1a5f4ec1f5c9fd29cf174989ede4bbc78e0ebce293c1be9090",
	                    "EndpointID": "39156859c4604efd2df2863c5e3925de2fea1de439a49b4a07789c5df04f2813",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-942644",
	                        "10950a3c65bf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
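Most of the inspect payload above is static container configuration; when only the pause-relevant state and the published ports matter, a Go-template query trims the output to two lines (a sketch; -f and the json template function are standard docker inspect options):

	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' old-k8s-version-942644
	docker inspect -f '{{json .NetworkSettings.Ports}}' old-k8s-version-942644

Against the state captured above this prints status=running paused=false plus the five 127.0.0.1 host-port bindings.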
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-942644 -n old-k8s-version-942644
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-942644 -n old-k8s-version-942644: exit status 2 (489.232206ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-942644 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-942644 logs -n 25: (1.669113598s)
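The dump that follows is capped at 25 lines per component (logs -n 25); for offline triage the full log set can instead be written to disk, as in this sketch using the logs --file flag (the output filename is illustrative):

	out/minikube-linux-arm64 -p old-k8s-version-942644 logs --file=old-k8s-version-942644.log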
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-750423 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-750423             │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │                     │
	│ ssh     │ -p cilium-750423 sudo containerd config dump                                                                                                                                                                                                  │ cilium-750423             │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │                     │
	│ ssh     │ -p cilium-750423 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-750423             │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │                     │
	│ ssh     │ -p cilium-750423 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-750423             │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │                     │
	│ ssh     │ -p cilium-750423 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-750423             │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │                     │
	│ ssh     │ -p cilium-750423 sudo crio config                                                                                                                                                                                                             │ cilium-750423             │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │                     │
	│ delete  │ -p cilium-750423                                                                                                                                                                                                                              │ cilium-750423             │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │ 27 Oct 25 19:54 UTC │
	│ start   │ -p force-systemd-env-105360 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-105360  │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │ 27 Oct 25 19:55 UTC │
	│ delete  │ -p force-systemd-env-105360                                                                                                                                                                                                                   │ force-systemd-env-105360  │ jenkins │ v1.37.0 │ 27 Oct 25 19:55 UTC │ 27 Oct 25 19:55 UTC │
	│ start   │ -p cert-expiration-280013 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-280013    │ jenkins │ v1.37.0 │ 27 Oct 25 19:55 UTC │ 27 Oct 25 19:55 UTC │
	│ delete  │ -p kubernetes-upgrade-524430                                                                                                                                                                                                                  │ kubernetes-upgrade-524430 │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:56 UTC │
	│ start   │ -p cert-options-319273 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-319273       │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:56 UTC │
	│ ssh     │ cert-options-319273 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-319273       │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:56 UTC │
	│ ssh     │ -p cert-options-319273 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-319273       │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:56 UTC │
	│ delete  │ -p cert-options-319273                                                                                                                                                                                                                        │ cert-options-319273       │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:56 UTC │
	│ start   │ -p old-k8s-version-942644 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:57 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-942644 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │                     │
	│ stop    │ -p old-k8s-version-942644 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:58 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-942644 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:58 UTC │
	│ start   │ -p old-k8s-version-942644 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:59 UTC │
	│ start   │ -p cert-expiration-280013 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-280013    │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:58 UTC │
	│ delete  │ -p cert-expiration-280013                                                                                                                                                                                                                     │ cert-expiration-280013    │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:59 UTC │
	│ start   │ -p no-preload-300878 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-300878         │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │                     │
	│ image   │ old-k8s-version-942644 image list --format=json                                                                                                                                                                                               │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 19:59 UTC │
	│ pause   │ -p old-k8s-version-942644 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 19:59:01
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 19:59:01.157842  452092 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:59:01.158049  452092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:59:01.158079  452092 out.go:374] Setting ErrFile to fd 2...
	I1027 19:59:01.158101  452092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:59:01.158445  452092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 19:59:01.159038  452092 out.go:368] Setting JSON to false
	I1027 19:59:01.163718  452092 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9694,"bootTime":1761585448,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1027 19:59:01.163849  452092 start.go:141] virtualization:  
	I1027 19:59:01.169861  452092 out.go:179] * [no-preload-300878] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 19:59:01.173058  452092 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:59:01.173094  452092 notify.go:220] Checking for updates...
	I1027 19:59:01.179117  452092 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:59:01.182180  452092 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 19:59:01.185123  452092 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube
	I1027 19:59:01.188218  452092 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 19:59:01.191191  452092 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:59:01.194660  452092 config.go:182] Loaded profile config "old-k8s-version-942644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1027 19:59:01.194865  452092 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:59:01.243211  452092 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 19:59:01.243417  452092 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:59:01.343143  452092 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 19:59:01.329749253 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 19:59:01.343241  452092 docker.go:318] overlay module found
	I1027 19:59:01.347639  452092 out.go:179] * Using the docker driver based on user configuration
	I1027 19:59:01.351836  452092 start.go:305] selected driver: docker
	I1027 19:59:01.351864  452092 start.go:925] validating driver "docker" against <nil>
	I1027 19:59:01.351884  452092 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:59:01.352827  452092 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:59:01.468281  452092 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 19:59:01.455404823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 19:59:01.468445  452092 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1027 19:59:01.468691  452092 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 19:59:01.472171  452092 out.go:179] * Using Docker driver with root privileges
	I1027 19:59:01.475214  452092 cni.go:84] Creating CNI manager for ""
	I1027 19:59:01.475292  452092 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:59:01.475310  452092 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 19:59:01.475394  452092 start.go:349] cluster config:
	{Name:no-preload-300878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-300878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:59:01.478716  452092 out.go:179] * Starting "no-preload-300878" primary control-plane node in "no-preload-300878" cluster
	I1027 19:59:01.481752  452092 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 19:59:01.484862  452092 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 19:59:01.487862  452092 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:59:01.487953  452092 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 19:59:01.488012  452092 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/config.json ...
	I1027 19:59:01.488045  452092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/config.json: {Name:mkbe34231d31e2da01fa535a1b181a68e268e53e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
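The cluster config dumped above is persisted as plain JSON at the config.json path in the WriteFile line. As a quick sanity check from the host, the profile can be inspected with jq; this is a sketch (jq itself is an assumption, and the on-disk field names are assumed to mirror the struct dump above):

    # inspect the saved profile config (path taken from the log line above)
    CONFIG=/home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/config.json
    jq '{Name, Driver,
         KubernetesVersion: .KubernetesConfig.KubernetesVersion,
         ContainerRuntime: .KubernetesConfig.ContainerRuntime}' "$CONFIG"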
	I1027 19:59:01.488269  452092 cache.go:107] acquiring lock: {Name:mk2c9b32a28909ddde1ea9e1562c451629f3a8bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:59:01.488329  452092 cache.go:115] /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1027 19:59:01.488343  452092 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 82.065µs
	I1027 19:59:01.488351  452092 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1027 19:59:01.488363  452092 cache.go:107] acquiring lock: {Name:mk41739ca1e3ab4374125f086ea6ae568ba48650 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:59:01.488436  452092 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1027 19:59:01.488626  452092 cache.go:107] acquiring lock: {Name:mk633cfcec5e23624dd56cce5b9a2941a9eb26ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:59:01.488703  452092 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 19:59:01.488802  452092 cache.go:107] acquiring lock: {Name:mk8f67f1010641520ce2aed88e36df35defaec67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:59:01.488884  452092 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1027 19:59:01.489027  452092 cache.go:107] acquiring lock: {Name:mk5a3679f1cf078979f9b59308ac24da693653f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:59:01.489114  452092 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1027 19:59:01.489210  452092 cache.go:107] acquiring lock: {Name:mk6af7dde40e27f19a53963487980377af2c3c95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:59:01.489278  452092 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1027 19:59:01.489380  452092 cache.go:107] acquiring lock: {Name:mk263e9fca65865b31b3432ab012737135a60a06 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:59:01.489448  452092 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1027 19:59:01.489529  452092 cache.go:107] acquiring lock: {Name:mkfced02b35956836ba86d3e97965fe21c458ea4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:59:01.489601  452092 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1027 19:59:01.492385  452092 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1027 19:59:01.492949  452092 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1027 19:59:01.493152  452092 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1027 19:59:01.493313  452092 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1027 19:59:01.493775  452092 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1027 19:59:01.493957  452092 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 19:59:01.495116  452092 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1027 19:59:01.523167  452092 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 19:59:01.523192  452092 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 19:59:01.523206  452092 cache.go:232] Successfully downloaded all kic artifacts
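The "download" of kic artifacts above is really just a presence check: the kicbase image is pinned by digest, so if the daemon already has it the pull is skipped. A minimal reproduction of that check, as a sketch using the same image reference recorded in the log:

    KIC='gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8'
    # docker image inspect exits non-zero when the reference is absent
    docker image inspect "$KIC" >/dev/null 2>&1 \
      && echo 'exists in daemon, skipping pull' \
      || docker pull "$KIC"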
	I1027 19:59:01.523228  452092 start.go:360] acquireMachinesLock for no-preload-300878: {Name:mk35847aee9eb4cb8c66d589a420d0e6e5324ab7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:59:01.523347  452092 start.go:364] duration metric: took 95.554µs to acquireMachinesLock for "no-preload-300878"
	I1027 19:59:01.523380  452092 start.go:93] Provisioning new machine with config: &{Name:no-preload-300878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-300878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:59:01.523444  452092 start.go:125] createHost starting for "" (driver="docker")
	W1027 19:58:57.694135  448653 pod_ready.go:104] pod "coredns-5dd5756b68-fzdkv" is not "Ready", error: <nil>
	W1027 19:58:59.698811  448653 pod_ready.go:104] pod "coredns-5dd5756b68-fzdkv" is not "Ready", error: <nil>
	W1027 19:59:02.195309  448653 pod_ready.go:104] pod "coredns-5dd5756b68-fzdkv" is not "Ready", error: <nil>
	I1027 19:59:01.536136  452092 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 19:59:01.536416  452092 start.go:159] libmachine.API.Create for "no-preload-300878" (driver="docker")
	I1027 19:59:01.536450  452092 client.go:168] LocalClient.Create starting
	I1027 19:59:01.536508  452092 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem
	I1027 19:59:01.536544  452092 main.go:141] libmachine: Decoding PEM data...
	I1027 19:59:01.536558  452092 main.go:141] libmachine: Parsing certificate...
	I1027 19:59:01.536611  452092 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem
	I1027 19:59:01.536626  452092 main.go:141] libmachine: Decoding PEM data...
	I1027 19:59:01.536636  452092 main.go:141] libmachine: Parsing certificate...
	I1027 19:59:01.536968  452092 cli_runner.go:164] Run: docker network inspect no-preload-300878 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 19:59:01.567416  452092 cli_runner.go:211] docker network inspect no-preload-300878 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 19:59:01.567496  452092 network_create.go:284] running [docker network inspect no-preload-300878] to gather additional debugging logs...
	I1027 19:59:01.567514  452092 cli_runner.go:164] Run: docker network inspect no-preload-300878
	W1027 19:59:01.587136  452092 cli_runner.go:211] docker network inspect no-preload-300878 returned with exit code 1
	I1027 19:59:01.587168  452092 network_create.go:287] error running [docker network inspect no-preload-300878]: docker network inspect no-preload-300878: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-300878 not found
	I1027 19:59:01.587181  452092 network_create.go:289] output of [docker network inspect no-preload-300878]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-300878 not found
	
	** /stderr **
	I1027 19:59:01.587297  452092 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 19:59:01.604978  452092 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-74ee89127400 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:aa:c7:29:bd:7a:a9} reservation:<nil>}
	I1027 19:59:01.605418  452092 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9c57ca829ac8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:0a:24:08:53:d6:07} reservation:<nil>}
	I1027 19:59:01.605659  452092 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b7a7c45fd176 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:52:e6:cd:c8:a6} reservation:<nil>}
	I1027 19:59:01.605953  452092 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-7843814e6694 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1a:ff:3b:a0:5e:3b} reservation:<nil>}
	I1027 19:59:01.606532  452092 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c5cf40}
	I1027 19:59:01.606561  452092 network_create.go:124] attempt to create docker network no-preload-300878 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1027 19:59:01.606666  452092 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-300878 no-preload-300878
	I1027 19:59:01.714348  452092 network_create.go:108] docker network no-preload-300878 192.168.85.0/24 created
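The subnet scan above walks the private 192.168.x.0/24 ranges until one is free: 49, 58, 67 and 76 were already claimed by bridges from other profiles, so 85 won. The create command, reproduced from the Run: line above for readability (flags verbatim, including minikube's unusual -o --ip-masq / -o --icc spelling):

    docker network create --driver=bridge \
      --subnet=192.168.85.0/24 --gateway=192.168.85.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=no-preload-300878 \
      no-preload-300878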
	I1027 19:59:01.714375  452092 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-300878" container
	I1027 19:59:01.714537  452092 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 19:59:01.738333  452092 cli_runner.go:164] Run: docker volume create no-preload-300878 --label name.minikube.sigs.k8s.io=no-preload-300878 --label created_by.minikube.sigs.k8s.io=true
	I1027 19:59:01.764779  452092 oci.go:103] Successfully created a docker volume no-preload-300878
	I1027 19:59:01.764863  452092 cli_runner.go:164] Run: docker run --rm --name no-preload-300878-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-300878 --entrypoint /usr/bin/test -v no-preload-300878:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 19:59:01.844597  452092 cache.go:162] opening:  /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1027 19:59:01.864714  452092 cache.go:162] opening:  /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1027 19:59:01.865364  452092 cache.go:162] opening:  /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1027 19:59:01.872068  452092 cache.go:162] opening:  /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1027 19:59:01.887005  452092 cache.go:162] opening:  /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1027 19:59:01.892609  452092 cache.go:162] opening:  /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1027 19:59:01.901224  452092 cache.go:162] opening:  /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1027 19:59:01.916371  452092 cache.go:157] /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1027 19:59:01.916394  452092 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 427.18528ms
	I1027 19:59:01.916406  452092 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1027 19:59:02.198571  452092 cache.go:157] /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1027 19:59:02.198602  452092 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 709.577405ms
	I1027 19:59:02.198613  452092 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1027 19:59:02.777073  452092 cli_runner.go:217] Completed: docker run --rm --name no-preload-300878-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-300878 --entrypoint /usr/bin/test -v no-preload-300878:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (1.012153823s)
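The "preload sidecar" above is a throwaway container whose entrypoint is /usr/bin/test. Because Docker seeds an empty named volume from the image content at the mount path, mounting the fresh volume at /var and asking test -d /var/lib both populates the volume and proves it is usable. The two commands as run in the log (KIC as defined in the earlier sketch):

    docker volume create no-preload-300878 \
      --label name.minikube.sigs.k8s.io=no-preload-300878 \
      --label created_by.minikube.sigs.k8s.io=true
    docker run --rm --name no-preload-300878-preload-sidecar \
      --label created_by.minikube.sigs.k8s.io=true \
      --label name.minikube.sigs.k8s.io=no-preload-300878 \
      --entrypoint /usr/bin/test \
      -v no-preload-300878:/var "$KIC" -d /var/lib   # exit 0 iff /var/lib exists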
	I1027 19:59:02.777104  452092 oci.go:107] Successfully prepared a docker volume no-preload-300878
	I1027 19:59:02.777119  452092 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1027 19:59:02.777326  452092 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1027 19:59:02.777470  452092 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 19:59:02.972550  452092 cache.go:157] /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1027 19:59:02.972945  452092 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.483412222s
	I1027 19:59:02.972995  452092 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1027 19:59:03.003583  452092 cache.go:157] /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1027 19:59:03.003863  452092 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.515063841s
	I1027 19:59:03.003915  452092 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1027 19:59:03.024136  452092 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-300878 --name no-preload-300878 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-300878 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-300878 --network no-preload-300878 --ip 192.168.85.2 --volume no-preload-300878:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 19:59:03.055728  452092 cache.go:157] /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1027 19:59:03.055809  452092 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.567184429s
	I1027 19:59:03.055861  452092 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1027 19:59:03.123305  452092 cache.go:157] /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1027 19:59:03.123379  452092 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.635015531s
	I1027 19:59:03.123410  452092 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1027 19:59:03.778023  452092 cli_runner.go:164] Run: docker container inspect no-preload-300878 --format={{.State.Running}}
	I1027 19:59:03.801883  452092 cli_runner.go:164] Run: docker container inspect no-preload-300878 --format={{.State.Status}}
	I1027 19:59:03.844781  452092 cli_runner.go:164] Run: docker exec no-preload-300878 stat /var/lib/dpkg/alternatives/iptables
	I1027 19:59:03.927698  452092 oci.go:144] the created container "no-preload-300878" has a running status.
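The node itself is the long docker run a few lines up: a privileged container on the new bridge with a static IP, sharing the volume prepared by the sidecar. The key flags, regrouped here as a sketch (labels dropped for brevity, everything else verbatim from the log):

    docker run -d -t --privileged \
      --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
      --tmpfs /tmp --tmpfs /run \
      -v /lib/modules:/lib/modules:ro \
      --hostname no-preload-300878 --name no-preload-300878 \
      --network no-preload-300878 --ip 192.168.85.2 \
      --volume no-preload-300878:/var \
      --memory=3072mb --cpus=2 -e container=docker \
      --expose 8443 \
      --publish=127.0.0.1::8443 --publish=127.0.0.1::22 \
      --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 \
      "$KIC"

The --publish=127.0.0.1:: form asks Docker to pick a free host port bound to loopback only, which is why SSH turns up at 127.0.0.1:33418 a few lines below.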
	I1027 19:59:03.927774  452092 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21801-266035/.minikube/machines/no-preload-300878/id_rsa...
	I1027 19:59:04.133198  452092 cache.go:157] /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1027 19:59:04.133289  452092 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.643909246s
	I1027 19:59:04.133361  452092 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1027 19:59:04.133393  452092 cache.go:87] Successfully saved all images to host disk.
	I1027 19:59:04.776875  452092 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21801-266035/.minikube/machines/no-preload-300878/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 19:59:04.807530  452092 cli_runner.go:164] Run: docker container inspect no-preload-300878 --format={{.State.Status}}
	I1027 19:59:04.832360  452092 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 19:59:04.832376  452092 kic_runner.go:114] Args: [docker exec --privileged no-preload-300878 chown docker:docker /home/docker/.ssh/authorized_keys]
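Key installation happens over docker cp/exec rather than SSH, since there is no trusted key inside the container yet. A sketch of the same three steps (the ssh-keygen flags are an assumption; the chown is verbatim from the Args: line above, and /home/docker/.ssh is assumed to already exist in the kicbase image):

    KEY=/home/jenkins/minikube-integration/21801-266035/.minikube/machines/no-preload-300878/id_rsa
    ssh-keygen -t rsa -N '' -f "$KEY"
    docker cp "$KEY.pub" no-preload-300878:/home/docker/.ssh/authorized_keys
    docker exec --privileged no-preload-300878 chown docker:docker /home/docker/.ssh/authorized_keys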
	I1027 19:59:04.881166  452092 cli_runner.go:164] Run: docker container inspect no-preload-300878 --format={{.State.Status}}
	I1027 19:59:04.911217  452092 machine.go:93] provisionDockerMachine start ...
	I1027 19:59:04.911323  452092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 19:59:04.935328  452092 main.go:141] libmachine: Using SSH client type: native
	I1027 19:59:04.935654  452092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1027 19:59:04.935665  452092 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 19:59:04.936318  452092 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60050->127.0.0.1:33418: read: connection reset by peer
	W1027 19:59:04.197347  448653 pod_ready.go:104] pod "coredns-5dd5756b68-fzdkv" is not "Ready", error: <nil>
	W1027 19:59:06.688604  448653 pod_ready.go:104] pod "coredns-5dd5756b68-fzdkv" is not "Ready", error: <nil>
	I1027 19:59:08.096273  452092 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-300878
	
	I1027 19:59:08.096349  452092 ubuntu.go:182] provisioning hostname "no-preload-300878"
	I1027 19:59:08.096471  452092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 19:59:08.118163  452092 main.go:141] libmachine: Using SSH client type: native
	I1027 19:59:08.118507  452092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1027 19:59:08.118519  452092 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-300878 && echo "no-preload-300878" | sudo tee /etc/hostname
	I1027 19:59:08.290031  452092 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-300878
	
	I1027 19:59:08.290122  452092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 19:59:08.311529  452092 main.go:141] libmachine: Using SSH client type: native
	I1027 19:59:08.311871  452092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1027 19:59:08.311892  452092 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-300878' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-300878/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-300878' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 19:59:08.463173  452092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
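The hostname script above is easier to read without the log prefixes: it writes /etc/hostname first, then patches the 127.0.1.1 line in /etc/hosts only when no entry for the new name exists. Copied from the two SSH commands above:

    sudo hostname no-preload-300878 && echo "no-preload-300878" | sudo tee /etc/hostname
    if ! grep -xq '.*\sno-preload-300878' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-300878/g' /etc/hosts
      else
        echo '127.0.1.1 no-preload-300878' | sudo tee -a /etc/hosts
      fi
    fi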
	I1027 19:59:08.463202  452092 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-266035/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-266035/.minikube}
	I1027 19:59:08.463230  452092 ubuntu.go:190] setting up certificates
	I1027 19:59:08.463241  452092 provision.go:84] configureAuth start
	I1027 19:59:08.463306  452092 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-300878
	I1027 19:59:08.480197  452092 provision.go:143] copyHostCerts
	I1027 19:59:08.480259  452092 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem, removing ...
	I1027 19:59:08.480272  452092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem
	I1027 19:59:08.480350  452092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem (1078 bytes)
	I1027 19:59:08.480457  452092 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem, removing ...
	I1027 19:59:08.480468  452092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem
	I1027 19:59:08.480494  452092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem (1123 bytes)
	I1027 19:59:08.480550  452092 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem, removing ...
	I1027 19:59:08.480558  452092 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem
	I1027 19:59:08.480582  452092 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem (1675 bytes)
	I1027 19:59:08.480631  452092 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem org=jenkins.no-preload-300878 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-300878]
	I1027 19:59:09.185786  452092 provision.go:177] copyRemoteCerts
	I1027 19:59:09.185856  452092 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 19:59:09.185901  452092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 19:59:09.203971  452092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/no-preload-300878/id_rsa Username:docker}
	I1027 19:59:09.316689  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 19:59:09.337523  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 19:59:09.357306  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 19:59:09.376600  452092 provision.go:87] duration metric: took 913.333326ms to configureAuth
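configureAuth mints a server certificate whose SANs cover every name the machine answers to (127.0.0.1, 192.168.85.2, localhost, minikube, no-preload-300878) and ships it into the node. minikube drives the transfer through its internal ssh_runner; a plain-ssh equivalent of the copyRemoteCerts step might look like the sketch below, where tee-over-ssh stands in for the logged scp because /etc/docker needs root (unquoted $SSH is fine here since the arguments contain no spaces):

    M=/home/jenkins/minikube-integration/21801-266035/.minikube
    SSH="ssh -i $M/machines/no-preload-300878/id_rsa -p 33418 docker@127.0.0.1"
    $SSH 'sudo mkdir -p /etc/docker'
    $SSH 'sudo tee /etc/docker/ca.pem         >/dev/null' < "$M/certs/ca.pem"
    $SSH 'sudo tee /etc/docker/server.pem     >/dev/null' < "$M/machines/server.pem"
    $SSH 'sudo tee /etc/docker/server-key.pem >/dev/null' < "$M/machines/server-key.pem"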
	I1027 19:59:09.376629  452092 ubuntu.go:206] setting minikube options for container-runtime
	I1027 19:59:09.376816  452092 config.go:182] Loaded profile config "no-preload-300878": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:59:09.376922  452092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 19:59:09.394762  452092 main.go:141] libmachine: Using SSH client type: native
	I1027 19:59:09.395113  452092 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1027 19:59:09.395133  452092 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 19:59:09.682560  452092 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 19:59:09.682582  452092 machine.go:96] duration metric: took 4.771346115s to provisionDockerMachine
	I1027 19:59:09.682614  452092 client.go:171] duration metric: took 8.146156918s to LocalClient.Create
	I1027 19:59:09.682628  452092 start.go:167] duration metric: took 8.146214681s to libmachine.API.Create "no-preload-300878"
	I1027 19:59:09.682635  452092 start.go:293] postStartSetup for "no-preload-300878" (driver="docker")
	I1027 19:59:09.682645  452092 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 19:59:09.682721  452092 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 19:59:09.682784  452092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 19:59:09.703935  452092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/no-preload-300878/id_rsa Username:docker}
	I1027 19:59:09.811601  452092 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 19:59:09.815314  452092 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 19:59:09.815345  452092 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 19:59:09.815356  452092 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/addons for local assets ...
	I1027 19:59:09.815417  452092 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/files for local assets ...
	I1027 19:59:09.815508  452092 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem -> 2678802.pem in /etc/ssl/certs
	I1027 19:59:09.815612  452092 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 19:59:09.823208  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 19:59:09.841509  452092 start.go:296] duration metric: took 158.859008ms for postStartSetup
	I1027 19:59:09.841872  452092 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-300878
	I1027 19:59:09.861063  452092 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/config.json ...
	I1027 19:59:09.861361  452092 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:59:09.861409  452092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 19:59:09.879329  452092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/no-preload-300878/id_rsa Username:docker}
	I1027 19:59:09.985422  452092 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 19:59:09.990223  452092 start.go:128] duration metric: took 8.466763878s to createHost
	I1027 19:59:09.990245  452092 start.go:83] releasing machines lock for "no-preload-300878", held for 8.466882955s
	I1027 19:59:09.990314  452092 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-300878
	I1027 19:59:10.018343  452092 ssh_runner.go:195] Run: cat /version.json
	I1027 19:59:10.018411  452092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 19:59:10.018721  452092 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 19:59:10.018793  452092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 19:59:10.044551  452092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/no-preload-300878/id_rsa Username:docker}
	I1027 19:59:10.060740  452092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/no-preload-300878/id_rsa Username:docker}
	I1027 19:59:10.271681  452092 ssh_runner.go:195] Run: systemctl --version
	I1027 19:59:10.278103  452092 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 19:59:10.320520  452092 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 19:59:10.324807  452092 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 19:59:10.324874  452092 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 19:59:10.356558  452092 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1027 19:59:10.356578  452092 start.go:495] detecting cgroup driver to use...
	I1027 19:59:10.356609  452092 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 19:59:10.356661  452092 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 19:59:10.374758  452092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 19:59:10.387568  452092 docker.go:218] disabling cri-docker service (if available) ...
	I1027 19:59:10.387661  452092 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 19:59:10.404850  452092 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 19:59:10.426750  452092 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 19:59:10.550065  452092 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 19:59:10.682872  452092 docker.go:234] disabling docker service ...
	I1027 19:59:10.683021  452092 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 19:59:10.708417  452092 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 19:59:10.722337  452092 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 19:59:10.843257  452092 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 19:59:10.970779  452092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
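Because the kicbase image ships with both dockerd and cri-dockerd, each has to be stopped and masked before cri-o can own the node; sockets are stopped alongside the services so socket activation cannot bring them back. The sequence above, condensed (flags verbatim from the Run: lines):

    sudo systemctl stop -f cri-docker.socket
    sudo systemctl stop -f cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket
    sudo systemctl stop -f docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service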
	I1027 19:59:10.983842  452092 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 19:59:10.997119  452092 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 19:59:10.997183  452092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:59:11.007836  452092 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 19:59:11.007986  452092 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:59:11.017966  452092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:59:11.027642  452092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:59:11.037911  452092 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 19:59:11.046380  452092 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:59:11.056640  452092 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:59:11.072042  452092 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:59:11.081581  452092 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 19:59:11.091201  452092 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 19:59:11.099360  452092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:59:11.232857  452092 ssh_runner.go:195] Run: sudo systemctl restart crio
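The cri-o tailoring above boils down to a crictl endpoint file plus a handful of idempotent sed edits on the drop-in config, then a restart. Condensed from the Run: lines (the /etc/cni/net.mk cleanup and the bridge-nf sysctl probe are left out for brevity):

    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    # let pods bind ports below 1024 without extra capabilities
    sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$CONF"
    sudo grep -q '^ *default_sysctls' "$CONF" || \
      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo systemctl daemon-reload && sudo systemctl restart crio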
	I1027 19:59:11.361090  452092 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 19:59:11.361204  452092 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 19:59:11.365642  452092 start.go:563] Will wait 60s for crictl version
	I1027 19:59:11.365734  452092 ssh_runner.go:195] Run: which crictl
	I1027 19:59:11.369456  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 19:59:11.397277  452092 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 19:59:11.397399  452092 ssh_runner.go:195] Run: crio --version
	I1027 19:59:11.424857  452092 ssh_runner.go:195] Run: crio --version
	I1027 19:59:11.455658  452092 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1027 19:59:08.688647  448653 pod_ready.go:104] pod "coredns-5dd5756b68-fzdkv" is not "Ready", error: <nil>
	W1027 19:59:11.189311  448653 pod_ready.go:104] pod "coredns-5dd5756b68-fzdkv" is not "Ready", error: <nil>
	I1027 19:59:11.458593  452092 cli_runner.go:164] Run: docker network inspect no-preload-300878 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 19:59:11.475182  452092 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1027 19:59:11.479166  452092 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 19:59:11.489574  452092 kubeadm.go:883] updating cluster {Name:no-preload-300878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-300878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 19:59:11.489691  452092 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:59:11.489750  452092 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:59:11.515003  452092 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1027 19:59:11.515026  452092 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1027 19:59:11.515064  452092 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:59:11.515278  452092 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1027 19:59:11.515405  452092 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 19:59:11.515514  452092 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1027 19:59:11.515624  452092 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1027 19:59:11.515734  452092 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1027 19:59:11.515836  452092 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1027 19:59:11.515927  452092 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1027 19:59:11.517051  452092 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1027 19:59:11.517294  452092 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1027 19:59:11.517460  452092 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1027 19:59:11.517633  452092 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 19:59:11.517797  452092 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:59:11.518094  452092 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1027 19:59:11.518345  452092 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1027 19:59:11.518547  452092 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1027 19:59:11.746357  452092 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1027 19:59:11.750661  452092 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1027 19:59:11.752231  452092 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1027 19:59:11.752451  452092 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1027 19:59:11.759850  452092 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1027 19:59:11.761757  452092 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 19:59:11.763974  452092 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1027 19:59:11.842950  452092 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1027 19:59:11.843007  452092 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1027 19:59:11.843114  452092 ssh_runner.go:195] Run: which crictl
	I1027 19:59:11.861841  452092 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1027 19:59:11.861885  452092 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1027 19:59:11.861969  452092 ssh_runner.go:195] Run: which crictl
	I1027 19:59:11.892712  452092 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1027 19:59:11.892804  452092 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1027 19:59:11.892881  452092 ssh_runner.go:195] Run: which crictl
	I1027 19:59:11.892987  452092 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1027 19:59:11.893063  452092 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1027 19:59:11.893119  452092 ssh_runner.go:195] Run: which crictl
	I1027 19:59:11.934336  452092 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1027 19:59:11.934424  452092 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1027 19:59:11.934501  452092 ssh_runner.go:195] Run: which crictl
	I1027 19:59:11.934631  452092 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1027 19:59:11.934790  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1027 19:59:11.934797  452092 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 19:59:11.934898  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1027 19:59:11.934940  452092 ssh_runner.go:195] Run: which crictl
	I1027 19:59:11.934723  452092 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1027 19:59:11.935005  452092 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1027 19:59:11.935047  452092 ssh_runner.go:195] Run: which crictl
	I1027 19:59:11.935093  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1027 19:59:11.935068  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1027 19:59:12.027921  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1027 19:59:12.028020  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 19:59:12.028070  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1027 19:59:12.028122  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1027 19:59:12.028180  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1027 19:59:12.043904  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1027 19:59:12.043998  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1027 19:59:12.130737  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1027 19:59:12.130816  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1027 19:59:12.130899  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 19:59:12.130972  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1027 19:59:12.131057  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1027 19:59:12.172556  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1027 19:59:12.172738  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1027 19:59:12.267507  452092 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1027 19:59:12.267594  452092 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1027 19:59:12.267638  452092 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1027 19:59:12.267682  452092 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1027 19:59:12.267755  452092 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1027 19:59:12.267798  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 19:59:12.267855  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1027 19:59:12.267607  452092 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1027 19:59:12.267975  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1027 19:59:12.268020  452092 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1027 19:59:12.268095  452092 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1027 19:59:12.339664  452092 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1027 19:59:12.339708  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1027 19:59:12.339759  452092 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1027 19:59:12.339834  452092 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1027 19:59:12.339918  452092 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1027 19:59:12.339970  452092 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1027 19:59:12.340027  452092 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1027 19:59:12.340045  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1027 19:59:12.339783  452092 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1027 19:59:12.340068  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1027 19:59:12.339983  452092 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1027 19:59:12.339812  452092 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1027 19:59:12.340143  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1027 19:59:12.340151  452092 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1027 19:59:12.394369  452092 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1027 19:59:12.394456  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1027 19:59:12.394555  452092 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1027 19:59:12.394606  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1027 19:59:12.394685  452092 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1027 19:59:12.394721  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
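
Each stat/scp pair above is the same cache-fallback step: probe for the tarball on the node, and transfer it from the local cache only when the probe exits non-zero. A minimal Go sketch of that check-then-copy pattern follows; the run and xfer callbacks are hypothetical stand-ins for minikube's ssh_runner, not its real API.

package sketch

import "fmt"

// ensureOnNode mirrors the stat-then-scp pairs in the log: it probes for
// dst on the node and copies src over only when the probe fails.
// run and xfer are hypothetical stand-ins for minikube's ssh_runner.
func ensureOnNode(run func(cmd string) error, xfer func(src, dst string) error, src, dst string) error {
	// `stat -c "%s %y" <dst>` exits with status 1 when the file is absent,
	// which is exactly the "existence check ... Process exited with status 1"
	// seen above.
	if err := run(fmt.Sprintf(`stat -c "%%s %%y" %s`, dst)); err == nil {
		return nil // already on the node from an earlier run; skip the scp
	}
	return xfer(src, dst) // e.g. cache/images/arm64/... --> /var/lib/minikube/images/...
}
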
	I1027 19:59:12.422348  452092 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1027 19:59:12.422468  452092 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	W1027 19:59:12.772849  452092 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1027 19:59:12.773090  452092 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:59:12.805491  452092 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1027 19:59:12.946045  452092 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1027 19:59:12.946151  452092 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1027 19:59:12.997829  452092 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1027 19:59:12.997897  452092 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:59:12.997971  452092 ssh_runner.go:195] Run: which crictl
	I1027 19:59:14.738971  452092 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.79279234s)
	I1027 19:59:14.739026  452092 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1027 19:59:14.739043  452092 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1027 19:59:14.739092  452092 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1027 19:59:14.739159  452092 ssh_runner.go:235] Completed: which crictl: (1.741173521s)
	I1027 19:59:14.739186  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:59:16.128648  452092 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.38943922s)
	I1027 19:59:16.128720  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:59:16.128792  452092 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.389685671s)
	I1027 19:59:16.128803  452092 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1027 19:59:16.128821  452092 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1027 19:59:16.128842  452092 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
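
The crio.go:275 lines above stream each transferred tarball into CRI-O's image store with `sudo podman load -i`, strictly one image at a time. A sketch of that loop, reusing the same hypothetical run callback:

package sketch

import (
	"fmt"
	"path/filepath"
)

// loadAll replays the crio.go:275 sequence: one `podman load` per cached
// tarball, serialized. run is a hypothetical command runner, as before.
func loadAll(run func(cmd string) error, dir string, images []string) error {
	for _, img := range images {
		tarball := filepath.Join(dir, img) // e.g. /var/lib/minikube/images/etcd_3.6.4-0
		if err := run("sudo podman load -i " + tarball); err != nil {
			return fmt.Errorf("loading %s: %w", img, err)
		}
	}
	return nil
}

Because the loads are serialized, the largest image dominates: the 98 MB etcd tarball alone accounts for about 4.5s of the 14.2s LoadCachedImages total reported below.
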
	W1027 19:59:13.689245  448653 pod_ready.go:104] pod "coredns-5dd5756b68-fzdkv" is not "Ready", error: <nil>
	I1027 19:59:16.188527  448653 pod_ready.go:94] pod "coredns-5dd5756b68-fzdkv" is "Ready"
	I1027 19:59:16.188549  448653 pod_ready.go:86] duration metric: took 39.006015866s for pod "coredns-5dd5756b68-fzdkv" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:59:16.192158  448653 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-942644" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:59:16.197651  448653 pod_ready.go:94] pod "etcd-old-k8s-version-942644" is "Ready"
	I1027 19:59:16.197674  448653 pod_ready.go:86] duration metric: took 5.493048ms for pod "etcd-old-k8s-version-942644" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:59:16.202449  448653 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-942644" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:59:16.208197  448653 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-942644" is "Ready"
	I1027 19:59:16.208221  448653 pod_ready.go:86] duration metric: took 5.75041ms for pod "kube-apiserver-old-k8s-version-942644" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:59:16.211605  448653 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-942644" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:59:16.386598  448653 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-942644" is "Ready"
	I1027 19:59:16.386674  448653 pod_ready.go:86] duration metric: took 174.996564ms for pod "kube-controller-manager-old-k8s-version-942644" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:59:16.586901  448653 pod_ready.go:83] waiting for pod "kube-proxy-nbdp5" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:59:16.985829  448653 pod_ready.go:94] pod "kube-proxy-nbdp5" is "Ready"
	I1027 19:59:16.985905  448653 pod_ready.go:86] duration metric: took 398.980669ms for pod "kube-proxy-nbdp5" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:59:17.186968  448653 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-942644" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:59:17.586354  448653 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-942644" is "Ready"
	I1027 19:59:17.586384  448653 pod_ready.go:86] duration metric: took 399.371247ms for pod "kube-scheduler-old-k8s-version-942644" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:59:17.586397  448653 pod_ready.go:40] duration metric: took 40.40843224s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
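
The pod_ready.go lines above poll each labelled kube-system pod until it reports Ready (or disappears), with a per-pod duration metric. The shape of that wait is a simple poll-until-timeout loop; a generic sketch, with the actual pod inspection abstracted behind a hypothetical check callback (the real code examines the PodReady condition via the API server):

package sketch

import (
	"context"
	"fmt"
	"time"
)

// waitReady polls check every interval until it reports true or ctx
// expires; this is the shape of the pod_ready.go waits above.
func waitReady(ctx context.Context, interval time.Duration, check func() (bool, error)) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}
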
	I1027 19:59:17.650879  448653 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1027 19:59:17.654505  448653 out.go:203] 
	W1027 19:59:17.657531  448653 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1027 19:59:17.660492  448653 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1027 19:59:17.663582  448653 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-942644" cluster and "default" namespace by default
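
The start.go:624 warning above is a version-skew check: kubectl is only supported within one minor version of the API server, so 1.33 against a 1.28 cluster is a skew of 5 minors. A trivial sketch of the computation:

package sketch

// minorSkew returns the absolute difference between two Kubernetes minor
// versions, e.g. minorSkew(33, 28) == 5, matching "minor skew: 5" above.
func minorSkew(clientMinor, serverMinor int) int {
	if clientMinor > serverMinor {
		return clientMinor - serverMinor
	}
	return serverMinor - clientMinor
}
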
	I1027 19:59:18.049845  452092 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.920982266s)
	I1027 19:59:18.049871  452092 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1027 19:59:18.049889  452092 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1027 19:59:18.049937  452092 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1027 19:59:18.050005  452092 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.921273891s)
	I1027 19:59:18.050046  452092 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:59:19.276461  452092 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.226390115s)
	I1027 19:59:19.276501  452092 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1027 19:59:19.276587  452092 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1027 19:59:19.276664  452092 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.226709827s)
	I1027 19:59:19.276698  452092 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1027 19:59:19.276724  452092 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1027 19:59:19.276784  452092 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1027 19:59:20.646426  452092 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.369613384s)
	I1027 19:59:20.646455  452092 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1027 19:59:20.646472  452092 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1027 19:59:20.646531  452092 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1027 19:59:20.646608  452092 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.370010525s)
	I1027 19:59:20.646629  452092 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1027 19:59:20.646645  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1027 19:59:25.103137  452092 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (4.456579429s)
	I1027 19:59:25.103216  452092 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1027 19:59:25.103256  452092 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1027 19:59:25.103333  452092 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1027 19:59:25.761149  452092 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1027 19:59:25.761188  452092 cache_images.go:124] Successfully loaded all cached images
	I1027 19:59:25.761195  452092 cache_images.go:93] duration metric: took 14.2461558s to LoadCachedImages
	I1027 19:59:25.761210  452092 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1027 19:59:25.761307  452092 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-300878 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-300878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 19:59:25.761414  452092 ssh_runner.go:195] Run: crio config
	I1027 19:59:25.820025  452092 cni.go:84] Creating CNI manager for ""
	I1027 19:59:25.820046  452092 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:59:25.820066  452092 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 19:59:25.820091  452092 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-300878 NodeName:no-preload-300878 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 19:59:25.820214  452092 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-300878"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
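
The kubeadm.yaml above is rendered from the kubeadm options struct logged at kubeadm.go:190. minikube does this with versioned Go templates; the fragment below is an illustrative text/template sketch under that assumption, not minikube's real template, and the field names are invented for the example.

package sketch

import (
	"os"
	"text/template"
)

// kubeadmTmpl is an illustrative fragment of an InitConfiguration template.
var kubeadmTmpl = template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
`))

type kubeadmParams struct {
	AdvertiseAddress string
	APIServerPort    int
}

// renderExample emits the fragment for the node in the log above.
func renderExample() error {
	return kubeadmTmpl.Execute(os.Stdout, kubeadmParams{
		AdvertiseAddress: "192.168.85.2",
		APIServerPort:    8443,
	})
}
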
	I1027 19:59:25.820283  452092 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 19:59:25.828701  452092 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1027 19:59:25.828835  452092 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1027 19:59:25.836143  452092 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1027 19:59:25.836302  452092 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1027 19:59:25.836748  452092 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21801-266035/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1027 19:59:25.837297  452092 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21801-266035/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1027 19:59:25.839777  452092 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1027 19:59:25.839805  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1027 19:59:26.646029  452092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:59:26.679663  452092 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1027 19:59:26.685163  452092 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1027 19:59:26.685267  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1027 19:59:26.791106  452092 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1027 19:59:26.809790  452092 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1027 19:59:26.809827  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
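
The `?checksum=file:...sha256` query on the download URLs above instructs the downloader to fetch the published .sha256 file and verify the binary against it before use. A standalone sketch of that verification step, given an already-downloaded file and its expected hex digest:

package sketch

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifySHA256 checks a downloaded binary against an expected hex digest,
// the effect of the `?checksum=file:...sha256` query in the URLs above.
func verifySHA256(path, wantHex string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
		return fmt.Errorf("checksum mismatch for %s: got %s, want %s", path, got, wantHex)
	}
	return nil
}
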
	I1027 19:59:27.394438  452092 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 19:59:27.402932  452092 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1027 19:59:27.418350  452092 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 19:59:27.433554  452092 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1027 19:59:27.447982  452092 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1027 19:59:27.452570  452092 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
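
The bash one-liner above makes the /etc/hosts update idempotent: it filters out any existing control-plane.minikube.internal line, appends the current mapping, and copies the result back. A Go sketch of the same rewrite (path, ip, and host are parameters of the example, not minikube's API):

package sketch

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites an /etc/hosts-style file so exactly one line maps host
// to ip, the same effect as the grep-filter-and-append one-liner above.
func pinHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) { // drop any stale entry
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}
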
	I1027 19:59:27.464039  452092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:59:27.588155  452092 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:59:27.604442  452092 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878 for IP: 192.168.85.2
	I1027 19:59:27.604460  452092 certs.go:195] generating shared ca certs ...
	I1027 19:59:27.604476  452092 certs.go:227] acquiring lock for ca certs: {Name:mk172548fc54811b51040cc0201d01eb4b3dd19b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:59:27.604624  452092 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key
	I1027 19:59:27.604664  452092 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key
	I1027 19:59:27.604671  452092 certs.go:257] generating profile certs ...
	I1027 19:59:27.604727  452092 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/client.key
	I1027 19:59:27.604738  452092 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/client.crt with IP's: []
	I1027 19:59:29.019152  452092 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/client.crt ...
	I1027 19:59:29.019181  452092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/client.crt: {Name:mkbb5ebfa77eba8c67f308cb4fbd6c17f1555ba1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:59:29.019385  452092 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/client.key ...
	I1027 19:59:29.019399  452092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/client.key: {Name:mk6f90b47f450aad83a391991d3734dd2474ddaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:59:29.019505  452092 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/apiserver.key.f5d283a0
	I1027 19:59:29.019522  452092 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/apiserver.crt.f5d283a0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1027 19:59:30.042864  452092 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/apiserver.crt.f5d283a0 ...
	I1027 19:59:30.042952  452092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/apiserver.crt.f5d283a0: {Name:mk2d521fac2a1bf164266b65da65cfe94463b8b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:59:30.043273  452092 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/apiserver.key.f5d283a0 ...
	I1027 19:59:30.043339  452092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/apiserver.key.f5d283a0: {Name:mk304e068571c5dde5ff33b96649ff357501c5c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:59:30.043502  452092 certs.go:382] copying /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/apiserver.crt.f5d283a0 -> /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/apiserver.crt
	I1027 19:59:30.043653  452092 certs.go:386] copying /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/apiserver.key.f5d283a0 -> /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/apiserver.key
	I1027 19:59:30.043909  452092 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/proxy-client.key
	I1027 19:59:30.043965  452092 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/proxy-client.crt with IP's: []
	I1027 19:59:31.024583  452092 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/proxy-client.crt ...
	I1027 19:59:31.024664  452092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/proxy-client.crt: {Name:mk9a33b4b39534b8dc19d62ac784812003559940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:59:31.024899  452092 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/proxy-client.key ...
	I1027 19:59:31.024942  452092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/proxy-client.key: {Name:mk993ba6b494e0fb2873af8eccf7c0ffdca1f760 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
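
The crypto.go lines above generate profile certificates signed by the shared minikubeCA, with the apiserver cert's IP SANs covering the service VIP, localhost, and the node IP. A self-contained crypto/x509 sketch of that leaf-signing step, assuming the CA cert and key already exist (as they do here, since the valid minikubeCA was skipped, not regenerated):

package sketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// apiserverCertPEM sketches the crypto.go steps above: build a leaf
// certificate whose IP SANs match the log, then sign it with the CA key.
func apiserverCertPEM(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // illustrative lifetime
		// The IP SANs from the log: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}
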
	I1027 19:59:31.025188  452092 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880.pem (1338 bytes)
	W1027 19:59:31.025259  452092 certs.go:480] ignoring /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880_empty.pem, impossibly tiny 0 bytes
	I1027 19:59:31.025286  452092 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 19:59:31.025349  452092 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem (1078 bytes)
	I1027 19:59:31.025406  452092 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem (1123 bytes)
	I1027 19:59:31.025452  452092 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem (1675 bytes)
	I1027 19:59:31.025536  452092 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 19:59:31.026125  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 19:59:31.044290  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 19:59:31.063208  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 19:59:31.085517  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 19:59:31.108677  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1027 19:59:31.141592  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 19:59:31.166046  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 19:59:31.187561  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 19:59:31.209137  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880.pem --> /usr/share/ca-certificates/267880.pem (1338 bytes)
	I1027 19:59:31.232183  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /usr/share/ca-certificates/2678802.pem (1708 bytes)
	I1027 19:59:31.254125  452092 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 19:59:31.276202  452092 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 19:59:31.291186  452092 ssh_runner.go:195] Run: openssl version
	I1027 19:59:31.300874  452092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/267880.pem && ln -fs /usr/share/ca-certificates/267880.pem /etc/ssl/certs/267880.pem"
	I1027 19:59:31.317061  452092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/267880.pem
	I1027 19:59:31.321692  452092 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:04 /usr/share/ca-certificates/267880.pem
	I1027 19:59:31.321758  452092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/267880.pem
	I1027 19:59:31.367739  452092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/267880.pem /etc/ssl/certs/51391683.0"
	I1027 19:59:31.376174  452092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2678802.pem && ln -fs /usr/share/ca-certificates/2678802.pem /etc/ssl/certs/2678802.pem"
	I1027 19:59:31.384685  452092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2678802.pem
	I1027 19:59:31.388662  452092 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:04 /usr/share/ca-certificates/2678802.pem
	I1027 19:59:31.388750  452092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2678802.pem
	I1027 19:59:31.429817  452092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2678802.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 19:59:31.438481  452092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 19:59:31.447109  452092 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:59:31.451417  452092 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:59:31.451505  452092 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:59:31.492735  452092 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
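
Each cert-installation round above follows the OpenSSL hashed-directory convention: copy the PEM into /usr/share/ca-certificates, compute its subject hash with `openssl x509 -hash -noout`, and symlink /etc/ssl/certs/<hash>.0 at it so OpenSSL-based clients can find the CA by hash. A sketch of the hash-and-link step, shelling out to the same openssl invocation the log uses:

package sketch

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash reproduces the shell above: ask openssl for the cert's
// subject hash and symlink /etc/ssl/certs/<hash>.0 to the cert.
func linkByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // `ln -fs` semantics: replace any stale link
	return os.Symlink(certPath, link)
}
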
	I1027 19:59:31.501840  452092 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 19:59:31.505851  452092 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 19:59:31.505905  452092 kubeadm.go:400] StartCluster: {Name:no-preload-300878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-300878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:59:31.505990  452092 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:59:31.506052  452092 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:59:31.536539  452092 cri.go:89] found id: ""
	I1027 19:59:31.536620  452092 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 19:59:31.545247  452092 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 19:59:31.553148  452092 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1027 19:59:31.553254  452092 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 19:59:31.561851  452092 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 19:59:31.561871  452092 kubeadm.go:157] found existing configuration files:
	
	I1027 19:59:31.561924  452092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 19:59:31.570320  452092 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 19:59:31.570380  452092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 19:59:31.578264  452092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 19:59:31.585781  452092 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 19:59:31.585845  452092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 19:59:31.593154  452092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 19:59:31.601048  452092 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 19:59:31.601165  452092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 19:59:31.608485  452092 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 19:59:31.616099  452092 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 19:59:31.616211  452092 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
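
The four grep-then-rm rounds above implement stale-config cleanup: each existing kubeconfig is checked for the expected control-plane URL and deleted when it is missing or does not match, so kubeadm regenerates it. A compact sketch of one round (path and endpoint are parameters of the example):

package sketch

import (
	"bytes"
	"os"
)

// pruneStaleKubeconfig deletes path unless it references endpoint,
// matching the grep-then-rm loop in the log above.
func pruneStaleKubeconfig(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err == nil && bytes.Contains(data, []byte(endpoint)) {
		return nil // config already points at the right control plane
	}
	// Missing file or wrong endpoint: remove so kubeadm writes a fresh one.
	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
		return err
	}
	return nil
}
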
	I1027 19:59:31.623930  452092 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 19:59:31.662468  452092 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1027 19:59:31.662637  452092 kubeadm.go:318] [preflight] Running pre-flight checks
	I1027 19:59:31.688962  452092 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1027 19:59:31.689052  452092 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1027 19:59:31.689097  452092 kubeadm.go:318] OS: Linux
	I1027 19:59:31.689153  452092 kubeadm.go:318] CGROUPS_CPU: enabled
	I1027 19:59:31.689212  452092 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1027 19:59:31.689269  452092 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1027 19:59:31.689327  452092 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1027 19:59:31.689385  452092 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1027 19:59:31.689443  452092 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1027 19:59:31.689497  452092 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1027 19:59:31.689555  452092 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1027 19:59:31.689611  452092 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1027 19:59:31.790015  452092 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 19:59:31.790138  452092 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 19:59:31.790238  452092 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 19:59:31.812792  452092 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	
	==> CRI-O <==
	Oct 27 19:59:11 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:11.667367405Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:59:11 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:11.675715986Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:59:11 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:11.676261324Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:59:11 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:11.692786963Z" level=info msg="Created container e1a6d4b6855d2b349dde7afdcd071acf4cd94ca737e469cfa84893b5b71d4655: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h8ggd/dashboard-metrics-scraper" id=d9de6d59-fe0a-4efb-8c4c-997044c4ad9c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:59:11 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:11.693841751Z" level=info msg="Starting container: e1a6d4b6855d2b349dde7afdcd071acf4cd94ca737e469cfa84893b5b71d4655" id=bdf409c2-91ae-418a-a2cc-7cd851470504 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:59:11 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:11.695865637Z" level=info msg="Started container" PID=1633 containerID=e1a6d4b6855d2b349dde7afdcd071acf4cd94ca737e469cfa84893b5b71d4655 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h8ggd/dashboard-metrics-scraper id=bdf409c2-91ae-418a-a2cc-7cd851470504 name=/runtime.v1.RuntimeService/StartContainer sandboxID=aa2ffa49e41e8f56c01d3c9268bdb24cdc849cae94cc26956a9ddd4817734327
	Oct 27 19:59:11 old-k8s-version-942644 conmon[1631]: conmon e1a6d4b6855d2b349dde <ninfo>: container 1633 exited with status 1
	Oct 27 19:59:11 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:11.931384857Z" level=info msg="Removing container: 1a065c2e5d2d1c32440c57f3300210cd63be6bf11ab6fd06a70652ff3395da3f" id=6642bda7-840d-424f-a54a-197522887c59 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 19:59:11 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:11.942836603Z" level=info msg="Error loading conmon cgroup of container 1a065c2e5d2d1c32440c57f3300210cd63be6bf11ab6fd06a70652ff3395da3f: cgroup deleted" id=6642bda7-840d-424f-a54a-197522887c59 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 19:59:11 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:11.949389017Z" level=info msg="Removed container 1a065c2e5d2d1c32440c57f3300210cd63be6bf11ab6fd06a70652ff3395da3f: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h8ggd/dashboard-metrics-scraper" id=6642bda7-840d-424f-a54a-197522887c59 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 19:59:16 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:16.816464825Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 19:59:16 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:16.820881874Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 19:59:16 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:16.820914562Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 19:59:16 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:16.820935197Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 19:59:16 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:16.823996633Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 19:59:16 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:16.824033727Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 19:59:16 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:16.824053107Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 19:59:16 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:16.826836569Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 19:59:16 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:16.826871407Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 19:59:16 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:16.826890082Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 19:59:16 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:16.829562983Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 19:59:16 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:16.829597156Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 19:59:16 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:16.829616306Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 19:59:16 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:16.832221911Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 19:59:16 old-k8s-version-942644 crio[652]: time="2025-10-27T19:59:16.832253688Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	e1a6d4b6855d2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago       Exited              dashboard-metrics-scraper   2                   aa2ffa49e41e8       dashboard-metrics-scraper-5f989dc9cf-h8ggd       kubernetes-dashboard
	955dc230362e0       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           29 seconds ago       Running             storage-provisioner         2                   9be867f4f292d       storage-provisioner                              kube-system
	41c348ac8acf8       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   31 seconds ago       Running             kubernetes-dashboard        0                   6c84e15c6561d       kubernetes-dashboard-8694d4445c-5rbpv            kubernetes-dashboard
	489f480edd095       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           59 seconds ago       Running             coredns                     1                   d37801f0cbd39       coredns-5dd5756b68-fzdkv                         kube-system
	4d661e9051c59       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           59 seconds ago       Running             kindnet-cni                 1                   8cdcf0ecc0ff4       kindnet-845vr                                    kube-system
	499383b8d8fc1       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           59 seconds ago       Exited              storage-provisioner         1                   9be867f4f292d       storage-provisioner                              kube-system
	7a2d5a71f412b       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           59 seconds ago       Running             kube-proxy                  1                   2002efcc324e4       kube-proxy-nbdp5                                 kube-system
	df25b11932b65       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   3eb11d6747fc5       busybox                                          default
	4191fbc773c78       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   0189770b5a579       etcd-old-k8s-version-942644                      kube-system
	8da113f89d96b       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   53a4859c0edac       kube-controller-manager-old-k8s-version-942644   kube-system
	8119904b23c36       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   bf54005ea42bb       kube-apiserver-old-k8s-version-942644            kube-system
	5f6e29c2f0799       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   d34e5f736a425       kube-scheduler-old-k8s-version-942644            kube-system
	
	
	==> coredns [489f480edd095f2ec8dafa5787de84eb1a9ed7d0820e496497cd82557cb54df5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:45562 - 13555 "HINFO IN 5364419795793254679.3340023868961547350. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017404448s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-942644
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-942644
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=old-k8s-version-942644
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T19_57_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 19:57:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-942644
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:59:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 19:59:05 +0000   Mon, 27 Oct 2025 19:57:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 19:59:05 +0000   Mon, 27 Oct 2025 19:57:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 19:59:05 +0000   Mon, 27 Oct 2025 19:57:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 19:59:05 +0000   Mon, 27 Oct 2025 19:57:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-942644
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                effa8846-d81b-42a0-8993-bf5b12f2eae0
	  Boot ID:                    23ea05b4-8203-4fb9-a84a-5deb4b091cbb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-5dd5756b68-fzdkv                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     115s
	  kube-system                 etcd-old-k8s-version-942644                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m11s
	  kube-system                 kindnet-845vr                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      115s
	  kube-system                 kube-apiserver-old-k8s-version-942644             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-controller-manager-old-k8s-version-942644    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-proxy-nbdp5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-scheduler-old-k8s-version-942644             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-h8ggd        0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-5rbpv             0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 111s                   kube-proxy       
	  Normal  Starting                 59s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m16s (x8 over 2m16s)  kubelet          Node old-k8s-version-942644 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m16s (x8 over 2m16s)  kubelet          Node old-k8s-version-942644 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m16s (x8 over 2m16s)  kubelet          Node old-k8s-version-942644 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m8s                   kubelet          Node old-k8s-version-942644 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m8s                   kubelet          Node old-k8s-version-942644 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m8s                   kubelet          Node old-k8s-version-942644 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m8s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           115s                   node-controller  Node old-k8s-version-942644 event: Registered Node old-k8s-version-942644 in Controller
	  Normal  NodeReady                101s                   kubelet          Node old-k8s-version-942644 status is now: NodeReady
	  Normal  Starting                 67s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  67s (x8 over 67s)      kubelet          Node old-k8s-version-942644 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    67s (x8 over 67s)      kubelet          Node old-k8s-version-942644 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     67s (x8 over 67s)      kubelet          Node old-k8s-version-942644 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                    node-controller  Node old-k8s-version-942644 event: Registered Node old-k8s-version-942644 in Controller
	
	
	==> dmesg <==
	[Oct27 19:34] overlayfs: idmapped layers are currently not supported
	[ +33.986700] overlayfs: idmapped layers are currently not supported
	[Oct27 19:36] overlayfs: idmapped layers are currently not supported
	[Oct27 19:37] overlayfs: idmapped layers are currently not supported
	[Oct27 19:38] overlayfs: idmapped layers are currently not supported
	[Oct27 19:39] overlayfs: idmapped layers are currently not supported
	[Oct27 19:40] overlayfs: idmapped layers are currently not supported
	[  +7.015891] overlayfs: idmapped layers are currently not supported
	[Oct27 19:41] overlayfs: idmapped layers are currently not supported
	[ +28.455216] overlayfs: idmapped layers are currently not supported
	[Oct27 19:42] overlayfs: idmapped layers are currently not supported
	[ +26.490174] overlayfs: idmapped layers are currently not supported
	[Oct27 19:44] overlayfs: idmapped layers are currently not supported
	[Oct27 19:45] overlayfs: idmapped layers are currently not supported
	[Oct27 19:47] overlayfs: idmapped layers are currently not supported
	[Oct27 19:49] overlayfs: idmapped layers are currently not supported
	[ +31.410335] overlayfs: idmapped layers are currently not supported
	[Oct27 19:51] overlayfs: idmapped layers are currently not supported
	[Oct27 19:53] overlayfs: idmapped layers are currently not supported
	[Oct27 19:54] overlayfs: idmapped layers are currently not supported
	[Oct27 19:55] overlayfs: idmapped layers are currently not supported
	[Oct27 19:56] overlayfs: idmapped layers are currently not supported
	[Oct27 19:57] overlayfs: idmapped layers are currently not supported
	[Oct27 19:58] overlayfs: idmapped layers are currently not supported
	[Oct27 19:59] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [4191fbc773c7860df74d9c43a79eaa2b2c2fedf87a834522d6484976aa6a7b38] <==
	{"level":"info","ts":"2025-10-27T19:58:30.632425Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-27T19:58:30.632559Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-27T19:58:30.632988Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-27T19:58:30.651152Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-27T19:58:30.651363Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T19:58:30.651425Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T19:58:30.654758Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-27T19:58:30.662867Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-27T19:58:30.662969Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-27T19:58:30.658694Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-27T19:58:30.675699Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-27T19:58:31.766802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-27T19:58:31.766911Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-27T19:58:31.76697Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-27T19:58:31.767022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-27T19:58:31.767053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-27T19:58:31.767089Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-27T19:58:31.767131Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-27T19:58:31.776392Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-942644 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-27T19:58:31.776626Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-27T19:58:31.776747Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-27T19:58:31.777822Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-27T19:58:31.783085Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-27T19:58:31.810235Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-27T19:58:31.810345Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:59:36 up  2:42,  0 user,  load average: 3.70, 3.06, 2.59
	Linux old-k8s-version-942644 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4d661e9051c59fc87cc15877e6a8f433cac6f4f3e16430714cea6c010b259343] <==
	I1027 19:58:36.541333       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 19:58:36.542018       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1027 19:58:36.542183       1 main.go:148] setting mtu 1500 for CNI 
	I1027 19:58:36.542197       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 19:58:36.542207       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T19:58:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 19:58:36.817055       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 19:58:36.817071       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 19:58:36.817080       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 19:58:36.817179       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 19:59:06.816945       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1027 19:59:06.816947       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1027 19:59:06.817094       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1027 19:59:06.817170       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1027 19:59:08.318164       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 19:59:08.318197       1 metrics.go:72] Registering metrics
	I1027 19:59:08.318250       1 controller.go:711] "Syncing nftables rules"
	I1027 19:59:16.816122       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 19:59:16.816238       1 main.go:301] handling current node
	I1027 19:59:26.824409       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 19:59:26.824443       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8119904b23c367a5d244f25c4fe2bc1cd3d35a55a65310c3653fba1207a28c6c] <==
	I1027 19:58:34.908992       1 cache.go:39] Caches are synced for autoregister controller
	I1027 19:58:34.909128       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1027 19:58:34.909697       1 shared_informer.go:318] Caches are synced for configmaps
	E1027 19:58:34.910423       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	I1027 19:58:34.911250       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 19:58:34.917832       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E1027 19:58:34.945902       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1027 19:58:34.948015       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1027 19:58:35.513864       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 19:58:36.837551       1 controller.go:624] quota admission added evaluator for: namespaces
	I1027 19:58:36.912437       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1027 19:58:36.945154       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 19:58:36.966358       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 19:58:36.979926       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1027 19:58:37.091426       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.20.113"}
	I1027 19:58:37.113018       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.48.208"}
	E1027 19:58:44.908667       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","leader-election","node-high","system","workload-high","workload-low","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	I1027 19:58:47.360274       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 19:58:47.384494       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1027 19:58:47.413689       1 controller.go:624] quota admission added evaluator for: endpoints
	E1027 19:58:54.909510       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","leader-election","node-high","system","workload-high","workload-low","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	E1027 19:59:04.909774       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","catch-all","exempt","global-default","leader-election","node-high","system","workload-high"] items=[{},{},{},{},{},{},{},{}]
	E1027 19:59:14.910731       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E1027 19:59:24.911660       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E1027 19:59:34.912133       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	
	
	==> kube-controller-manager [8da113f89d96b45a2ba55effd9ad48b2c52db76a2494916df6764912cbea8fcf] <==
	I1027 19:58:47.431563       1 shared_informer.go:318] Caches are synced for disruption
	I1027 19:58:47.463606       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1027 19:58:47.489729       1 shared_informer.go:318] Caches are synced for resource quota
	I1027 19:58:47.493762       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-h8ggd"
	I1027 19:58:47.499590       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-5rbpv"
	I1027 19:58:47.525728       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="109.395301ms"
	I1027 19:58:47.559404       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="142.73456ms"
	I1027 19:58:47.564006       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="37.952232ms"
	I1027 19:58:47.567179       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.423µs"
	I1027 19:58:47.603333       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="43.81102ms"
	I1027 19:58:47.603510       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="62.177µs"
	I1027 19:58:47.603888       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="66.476µs"
	I1027 19:58:47.636492       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="52.807µs"
	I1027 19:58:47.791200       1 shared_informer.go:318] Caches are synced for garbage collector
	I1027 19:58:47.791295       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1027 19:58:47.793344       1 shared_informer.go:318] Caches are synced for garbage collector
	I1027 19:58:56.867533       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="79.481µs"
	I1027 19:58:57.893413       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="46.472µs"
	I1027 19:58:58.953954       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="108.182µs"
	I1027 19:59:04.955579       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="18.026346ms"
	I1027 19:59:04.956367       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="51.855µs"
	I1027 19:59:11.968677       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="47.228µs"
	I1027 19:59:16.029176       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.502552ms"
	I1027 19:59:16.029680       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="75.69µs"
	I1027 19:59:18.759559       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="46.317µs"
	
	
	==> kube-proxy [7a2d5a71f412b243de7e7e81e23b51bc4017375d2bae9648942dc2819590c31d] <==
	I1027 19:58:36.606152       1 server_others.go:69] "Using iptables proxy"
	I1027 19:58:36.621152       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1027 19:58:36.736563       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 19:58:36.749783       1 server_others.go:152] "Using iptables Proxier"
	I1027 19:58:36.749821       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1027 19:58:36.749828       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1027 19:58:36.749863       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1027 19:58:36.750121       1 server.go:846] "Version info" version="v1.28.0"
	I1027 19:58:36.750132       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:58:36.764250       1 config.go:188] "Starting service config controller"
	I1027 19:58:36.764266       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1027 19:58:36.764283       1 config.go:97] "Starting endpoint slice config controller"
	I1027 19:58:36.764287       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1027 19:58:36.767209       1 config.go:315] "Starting node config controller"
	I1027 19:58:36.767230       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1027 19:58:36.864409       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1027 19:58:36.864955       1 shared_informer.go:318] Caches are synced for service config
	I1027 19:58:36.868070       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [5f6e29c2f0799fd26d70aeb640fbc8515dabd349f18d85e616d9b44fa0a76304] <==
	I1027 19:58:32.352764       1 serving.go:348] Generated self-signed cert in-memory
	W1027 19:58:34.627095       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 19:58:34.627126       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 19:58:34.627138       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 19:58:34.627157       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 19:58:34.862143       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1027 19:58:34.865764       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:58:34.867872       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:58:34.867952       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1027 19:58:34.868530       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1027 19:58:34.869721       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1027 19:58:34.968767       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 27 19:58:47 old-k8s-version-942644 kubelet[778]: I1027 19:58:47.659967     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0bd4a580-5e95-40f0-bbcc-10838ef4c773-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-5rbpv\" (UID: \"0bd4a580-5e95-40f0-bbcc-10838ef4c773\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5rbpv"
	Oct 27 19:58:47 old-k8s-version-942644 kubelet[778]: I1027 19:58:47.660003     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/744e3140-27f8-4808-9cd5-97dae649dc0c-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-h8ggd\" (UID: \"744e3140-27f8-4808-9cd5-97dae649dc0c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h8ggd"
	Oct 27 19:58:47 old-k8s-version-942644 kubelet[778]: I1027 19:58:47.660028     778 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h7kw\" (UniqueName: \"kubernetes.io/projected/744e3140-27f8-4808-9cd5-97dae649dc0c-kube-api-access-7h7kw\") pod \"dashboard-metrics-scraper-5f989dc9cf-h8ggd\" (UID: \"744e3140-27f8-4808-9cd5-97dae649dc0c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h8ggd"
	Oct 27 19:58:48 old-k8s-version-942644 kubelet[778]: W1027 19:58:48.812255     778 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/10950a3c65bf121769a3f4633bb765a9cdb0464d8195fe6bebdec626b5deca5a/crio-aa2ffa49e41e8f56c01d3c9268bdb24cdc849cae94cc26956a9ddd4817734327 WatchSource:0}: Error finding container aa2ffa49e41e8f56c01d3c9268bdb24cdc849cae94cc26956a9ddd4817734327: Status 404 returned error can't find the container with id aa2ffa49e41e8f56c01d3c9268bdb24cdc849cae94cc26956a9ddd4817734327
	Oct 27 19:58:48 old-k8s-version-942644 kubelet[778]: W1027 19:58:48.864476     778 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/10950a3c65bf121769a3f4633bb765a9cdb0464d8195fe6bebdec626b5deca5a/crio-6c84e15c6561d109ad9889eb3ac9a70a013e767eb768a02e28c9508c52729dc5 WatchSource:0}: Error finding container 6c84e15c6561d109ad9889eb3ac9a70a013e767eb768a02e28c9508c52729dc5: Status 404 returned error can't find the container with id 6c84e15c6561d109ad9889eb3ac9a70a013e767eb768a02e28c9508c52729dc5
	Oct 27 19:58:56 old-k8s-version-942644 kubelet[778]: I1027 19:58:56.849965     778 scope.go:117] "RemoveContainer" containerID="c478c85c9c8138771143b132a48fcd6eb4512f8e116cfdee29cedbb4fe0b6045"
	Oct 27 19:58:57 old-k8s-version-942644 kubelet[778]: I1027 19:58:57.851987     778 scope.go:117] "RemoveContainer" containerID="c478c85c9c8138771143b132a48fcd6eb4512f8e116cfdee29cedbb4fe0b6045"
	Oct 27 19:58:57 old-k8s-version-942644 kubelet[778]: I1027 19:58:57.852283     778 scope.go:117] "RemoveContainer" containerID="1a065c2e5d2d1c32440c57f3300210cd63be6bf11ab6fd06a70652ff3395da3f"
	Oct 27 19:58:57 old-k8s-version-942644 kubelet[778]: E1027 19:58:57.855343     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-h8ggd_kubernetes-dashboard(744e3140-27f8-4808-9cd5-97dae649dc0c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h8ggd" podUID="744e3140-27f8-4808-9cd5-97dae649dc0c"
	Oct 27 19:58:58 old-k8s-version-942644 kubelet[778]: I1027 19:58:58.856091     778 scope.go:117] "RemoveContainer" containerID="1a065c2e5d2d1c32440c57f3300210cd63be6bf11ab6fd06a70652ff3395da3f"
	Oct 27 19:58:58 old-k8s-version-942644 kubelet[778]: E1027 19:58:58.856375     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-h8ggd_kubernetes-dashboard(744e3140-27f8-4808-9cd5-97dae649dc0c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h8ggd" podUID="744e3140-27f8-4808-9cd5-97dae649dc0c"
	Oct 27 19:58:59 old-k8s-version-942644 kubelet[778]: I1027 19:58:59.858212     778 scope.go:117] "RemoveContainer" containerID="1a065c2e5d2d1c32440c57f3300210cd63be6bf11ab6fd06a70652ff3395da3f"
	Oct 27 19:58:59 old-k8s-version-942644 kubelet[778]: E1027 19:58:59.858499     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-h8ggd_kubernetes-dashboard(744e3140-27f8-4808-9cd5-97dae649dc0c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h8ggd" podUID="744e3140-27f8-4808-9cd5-97dae649dc0c"
	Oct 27 19:59:06 old-k8s-version-942644 kubelet[778]: I1027 19:59:06.910620     778 scope.go:117] "RemoveContainer" containerID="499383b8d8fc1d11df4a7905d477e6e6830cc86dd0bc67d3eb588824cec2dc07"
	Oct 27 19:59:06 old-k8s-version-942644 kubelet[778]: I1027 19:59:06.932881     778 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-5rbpv" podStartSLOduration=4.234066717 podCreationTimestamp="2025-10-27 19:58:47 +0000 UTC" firstStartedPulling="2025-10-27 19:58:48.892915174 +0000 UTC m=+19.440596029" lastFinishedPulling="2025-10-27 19:59:04.591670016 +0000 UTC m=+35.139350880" observedRunningTime="2025-10-27 19:59:04.936695541 +0000 UTC m=+35.484376397" watchObservedRunningTime="2025-10-27 19:59:06.932821568 +0000 UTC m=+37.480502432"
	Oct 27 19:59:11 old-k8s-version-942644 kubelet[778]: I1027 19:59:11.663172     778 scope.go:117] "RemoveContainer" containerID="1a065c2e5d2d1c32440c57f3300210cd63be6bf11ab6fd06a70652ff3395da3f"
	Oct 27 19:59:11 old-k8s-version-942644 kubelet[778]: I1027 19:59:11.929106     778 scope.go:117] "RemoveContainer" containerID="1a065c2e5d2d1c32440c57f3300210cd63be6bf11ab6fd06a70652ff3395da3f"
	Oct 27 19:59:11 old-k8s-version-942644 kubelet[778]: I1027 19:59:11.929370     778 scope.go:117] "RemoveContainer" containerID="e1a6d4b6855d2b349dde7afdcd071acf4cd94ca737e469cfa84893b5b71d4655"
	Oct 27 19:59:11 old-k8s-version-942644 kubelet[778]: E1027 19:59:11.929644     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-h8ggd_kubernetes-dashboard(744e3140-27f8-4808-9cd5-97dae649dc0c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h8ggd" podUID="744e3140-27f8-4808-9cd5-97dae649dc0c"
	Oct 27 19:59:18 old-k8s-version-942644 kubelet[778]: I1027 19:59:18.732863     778 scope.go:117] "RemoveContainer" containerID="e1a6d4b6855d2b349dde7afdcd071acf4cd94ca737e469cfa84893b5b71d4655"
	Oct 27 19:59:18 old-k8s-version-942644 kubelet[778]: E1027 19:59:18.733623     778 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-h8ggd_kubernetes-dashboard(744e3140-27f8-4808-9cd5-97dae649dc0c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h8ggd" podUID="744e3140-27f8-4808-9cd5-97dae649dc0c"
	Oct 27 19:59:30 old-k8s-version-942644 kubelet[778]: I1027 19:59:30.371646     778 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 27 19:59:30 old-k8s-version-942644 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 19:59:30 old-k8s-version-942644 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 19:59:30 old-k8s-version-942644 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [41c348ac8acf87523b5ca5a1bc063fd5887c49974ca78f9ddfd69cc2af77e23d] <==
	2025/10/27 19:59:04 Using namespace: kubernetes-dashboard
	2025/10/27 19:59:04 Using in-cluster config to connect to apiserver
	2025/10/27 19:59:04 Using secret token for csrf signing
	2025/10/27 19:59:04 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 19:59:04 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 19:59:04 Successful initial request to the apiserver, version: v1.28.0
	2025/10/27 19:59:04 Generating JWE encryption key
	2025/10/27 19:59:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 19:59:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 19:59:05 Initializing JWE encryption key from synchronized object
	2025/10/27 19:59:05 Creating in-cluster Sidecar client
	2025/10/27 19:59:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 19:59:05 Serving insecurely on HTTP port: 9090
	2025/10/27 19:59:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 19:59:04 Starting overwatch
	
	
	==> storage-provisioner [499383b8d8fc1d11df4a7905d477e6e6830cc86dd0bc67d3eb588824cec2dc07] <==
	I1027 19:58:36.564059       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 19:59:06.566361       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [955dc230362e096be0e14119979eeb4b516307eceab1bee2309c5c10aee85887] <==
	I1027 19:59:06.953698       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 19:59:06.967649       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 19:59:06.967825       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1027 19:59:24.371921       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 19:59:24.372112       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-942644_147149da-62b0-4ee4-81eb-fb51b192b049!
	I1027 19:59:24.374078       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ba540a69-e48f-48b4-a3e1-f6e693f646a8", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-942644_147149da-62b0-4ee4-81eb-fb51b192b049 became leader
	I1027 19:59:24.473639       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-942644_147149da-62b0-4ee4-81eb-fb51b192b049!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-942644 -n old-k8s-version-942644
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-942644 -n old-k8s-version-942644: exit status 2 (464.668395ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-942644 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (8.07s)
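Note on the failure mode visible in the dump above: kindnet's reflectors and the first storage-provisioner instance both report `dial tcp 10.96.0.1:443: i/o timeout`, i.e. the in-cluster apiserver Service was unreachable for the 30s client timeout window right after the restart. A minimal manual probe of that path, as a sketch (profile name taken from the logs above; assumes curl is present in the node image):

	minikube -p old-k8s-version-942644 ssh -- curl -sk -m 5 https://10.96.0.1:443/version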

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.53s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-300878 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-300878 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (296.795879ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T20:00:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-300878 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
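The MK_ADDON_ENABLE_PAUSED exit above comes from minikube's paused check, which runs `sudo runc list -f json` inside the node; runc defaults its state root to /run/runc, and the stderr shows that directory is missing on this cri-o node (cri-o may be driving a different OCI runtime, such as crun, whose state typically lives under /run/crun). A sketch for reproducing the check by hand, using the profile name from the log:

	# the command the paused check runs inside the node, per the stderr above
	minikube -p no-preload-300878 ssh -- sudo runc list -f json
	# check which runtime state roots actually exist on the node
	minikube -p no-preload-300878 ssh -- ls -d /run/runc /run/crun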
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-300878 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-300878 describe deploy/metrics-server -n kube-system: exit status 1 (80.665621ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-300878 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
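For context, the assertion reads the container image off the metrics-server Deployment: with `--registries=MetricsServer=fake.domain`, the addon's image reference is expected to be rewritten onto the fake registry. Had the deployment been created, the checked field could be read directly; a sketch using the kubectl context from the log:

	kubectl --context no-preload-300878 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected to contain: fake.domain/registry.k8s.io/echoserver:1.4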
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-300878
helpers_test.go:243: (dbg) docker inspect no-preload-300878:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5f7533431bd6ad05927fc6ee4ffbe127ba4e775919d072603a24e3cd4e1d5d89",
	        "Created": "2025-10-27T19:59:03.085735227Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 452396,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T19:59:03.368947481Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/5f7533431bd6ad05927fc6ee4ffbe127ba4e775919d072603a24e3cd4e1d5d89/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5f7533431bd6ad05927fc6ee4ffbe127ba4e775919d072603a24e3cd4e1d5d89/hostname",
	        "HostsPath": "/var/lib/docker/containers/5f7533431bd6ad05927fc6ee4ffbe127ba4e775919d072603a24e3cd4e1d5d89/hosts",
	        "LogPath": "/var/lib/docker/containers/5f7533431bd6ad05927fc6ee4ffbe127ba4e775919d072603a24e3cd4e1d5d89/5f7533431bd6ad05927fc6ee4ffbe127ba4e775919d072603a24e3cd4e1d5d89-json.log",
	        "Name": "/no-preload-300878",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-300878:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-300878",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5f7533431bd6ad05927fc6ee4ffbe127ba4e775919d072603a24e3cd4e1d5d89",
	                "LowerDir": "/var/lib/docker/overlay2/9bbd17e120a72b16cae429750c48e4e412848fa0a221daef03291fd6af1df13e-init/diff:/var/lib/docker/overlay2/0567218bbe05e8b59b2aef7ad82032d6716040c1f9d9d73e7de8682101079f76/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9bbd17e120a72b16cae429750c48e4e412848fa0a221daef03291fd6af1df13e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9bbd17e120a72b16cae429750c48e4e412848fa0a221daef03291fd6af1df13e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9bbd17e120a72b16cae429750c48e4e412848fa0a221daef03291fd6af1df13e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-300878",
	                "Source": "/var/lib/docker/volumes/no-preload-300878/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-300878",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-300878",
	                "name.minikube.sigs.k8s.io": "no-preload-300878",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "522611ba4e702d70d5e4b540fe960d62c6be4b846a08607a7a63d456d68f6139",
	            "SandboxKey": "/var/run/docker/netns/522611ba4e70",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33418"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33420"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-300878": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6a:1a:89:46:15:65",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "12fd71527f5be91d352c6fcacb328f609f1124632115c17524de411b48d37139",
	                    "EndpointID": "f603e7587181094b96d3e7c694844ad4e0eb691119fb8caec7f3d95989a9db65",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-300878",
	                        "5f7533431bd6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
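The port map in the inspect output is what the harness dials from the host: the API server's container port 8443 is published on 127.0.0.1:33421. One way to pull a mapping out of this JSON, assuming jq is available on the host (a sketch):

	docker inspect no-preload-300878 \
	  | jq -r '.[0].NetworkSettings.Ports["8443/tcp"][0].HostPort'
	# -> 33421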
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-300878 -n no-preload-300878
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-300878 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-300878 logs -n 25: (1.232766488s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-750423 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-750423             │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │                     │
	│ ssh     │ -p cilium-750423 sudo crio config                                                                                                                                                                                                             │ cilium-750423             │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │                     │
	│ delete  │ -p cilium-750423                                                                                                                                                                                                                              │ cilium-750423             │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │ 27 Oct 25 19:54 UTC │
	│ start   │ -p force-systemd-env-105360 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-105360  │ jenkins │ v1.37.0 │ 27 Oct 25 19:54 UTC │ 27 Oct 25 19:55 UTC │
	│ delete  │ -p force-systemd-env-105360                                                                                                                                                                                                                   │ force-systemd-env-105360  │ jenkins │ v1.37.0 │ 27 Oct 25 19:55 UTC │ 27 Oct 25 19:55 UTC │
	│ start   │ -p cert-expiration-280013 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-280013    │ jenkins │ v1.37.0 │ 27 Oct 25 19:55 UTC │ 27 Oct 25 19:55 UTC │
	│ delete  │ -p kubernetes-upgrade-524430                                                                                                                                                                                                                  │ kubernetes-upgrade-524430 │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:56 UTC │
	│ start   │ -p cert-options-319273 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-319273       │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:56 UTC │
	│ ssh     │ cert-options-319273 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-319273       │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:56 UTC │
	│ ssh     │ -p cert-options-319273 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-319273       │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:56 UTC │
	│ delete  │ -p cert-options-319273                                                                                                                                                                                                                        │ cert-options-319273       │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:56 UTC │
	│ start   │ -p old-k8s-version-942644 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:57 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-942644 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │                     │
	│ stop    │ -p old-k8s-version-942644 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:58 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-942644 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:58 UTC │
	│ start   │ -p old-k8s-version-942644 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:59 UTC │
	│ start   │ -p cert-expiration-280013 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-280013    │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:58 UTC │
	│ delete  │ -p cert-expiration-280013                                                                                                                                                                                                                     │ cert-expiration-280013    │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:59 UTC │
	│ start   │ -p no-preload-300878 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-300878         │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 20:00 UTC │
	│ image   │ old-k8s-version-942644 image list --format=json                                                                                                                                                                                               │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 19:59 UTC │
	│ pause   │ -p old-k8s-version-942644 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │                     │
	│ delete  │ -p old-k8s-version-942644                                                                                                                                                                                                                     │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 19:59 UTC │
	│ delete  │ -p old-k8s-version-942644                                                                                                                                                                                                                     │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 19:59 UTC │
	│ start   │ -p embed-certs-629838 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-629838        │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-300878 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-300878         │ jenkins │ v1.37.0 │ 27 Oct 25 20:00 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 19:59:41
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 19:59:41.161571  456180 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:59:41.161818  456180 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:59:41.161848  456180 out.go:374] Setting ErrFile to fd 2...
	I1027 19:59:41.161880  456180 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:59:41.162277  456180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 19:59:41.162862  456180 out.go:368] Setting JSON to false
	I1027 19:59:41.163971  456180 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9734,"bootTime":1761585448,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1027 19:59:41.164090  456180 start.go:141] virtualization:  
	I1027 19:59:41.168256  456180 out.go:179] * [embed-certs-629838] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 19:59:41.171729  456180 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:59:41.171803  456180 notify.go:220] Checking for updates...
	I1027 19:59:41.176349  456180 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:59:41.179604  456180 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 19:59:41.182770  456180 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube
	I1027 19:59:41.185915  456180 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 19:59:41.189159  456180 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:59:41.192808  456180 config.go:182] Loaded profile config "no-preload-300878": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:59:41.192963  456180 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:59:41.231065  456180 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 19:59:41.231207  456180 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:59:41.316792  456180 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2025-10-27 19:59:41.301874091 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 19:59:41.316968  456180 docker.go:318] overlay module found
	I1027 19:59:41.322063  456180 out.go:179] * Using the docker driver based on user configuration
	I1027 19:59:41.325028  456180 start.go:305] selected driver: docker
	I1027 19:59:41.325085  456180 start.go:925] validating driver "docker" against <nil>
	I1027 19:59:41.325113  456180 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:59:41.325897  456180 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:59:41.424790  456180 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2025-10-27 19:59:41.410737862 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 19:59:41.424943  456180 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1027 19:59:41.425175  456180 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 19:59:41.428141  456180 out.go:179] * Using Docker driver with root privileges
	I1027 19:59:41.431087  456180 cni.go:84] Creating CNI manager for ""
	I1027 19:59:41.431164  456180 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:59:41.431175  456180 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 19:59:41.431258  456180 start.go:349] cluster config:
	{Name:embed-certs-629838 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-629838 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
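
Editor's note: the kindnet recommendation a few lines up follows from the driver/runtime pair. A minimal sketch of that decision rule, inferred only from the log line ("docker" driver + "crio" runtime found, recommending kindnet) and not taken from minikube's source:

package main

import "fmt"

// recommendCNI is a hypothetical helper: only the docker-driver + crio-runtime
// -> kindnet case is attested by the log above; the fallback is an assumption.
func recommendCNI(driver, runtime string) string {
	if driver == "docker" && runtime != "docker" {
		return "kindnet"
	}
	return "bridge"
}

func main() {
	fmt.Println(recommendCNI("docker", "crio")) // kindnet
}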
	I1027 19:59:41.434447  456180 out.go:179] * Starting "embed-certs-629838" primary control-plane node in "embed-certs-629838" cluster
	I1027 19:59:41.437411  456180 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 19:59:41.440372  456180 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 19:59:41.443298  456180 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:59:41.443375  456180 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 19:59:41.443386  456180 cache.go:58] Caching tarball of preloaded images
	I1027 19:59:41.443501  456180 preload.go:233] Found /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 19:59:41.443511  456180 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 19:59:41.443615  456180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/config.json ...
	I1027 19:59:41.443632  456180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/config.json: {Name:mk81d6a06ce7a960e93ca01ebe98cded6da62f2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:59:41.443803  456180 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 19:59:41.464915  456180 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 19:59:41.464936  456180 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 19:59:41.464948  456180 cache.go:232] Successfully downloaded all kic artifacts
	I1027 19:59:41.464974  456180 start.go:360] acquireMachinesLock for embed-certs-629838: {Name:mk8675e8c935af9c23da71750794b4a71f97e11f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:59:41.465077  456180 start.go:364] duration metric: took 86.225µs to acquireMachinesLock for "embed-certs-629838"
	I1027 19:59:41.465108  456180 start.go:93] Provisioning new machine with config: &{Name:embed-certs-629838 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-629838 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:59:41.465179  456180 start.go:125] createHost starting for "" (driver="docker")
	I1027 19:59:42.617452  452092 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.002113407s
	I1027 19:59:42.623379  452092 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 19:59:42.623719  452092 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1027 19:59:42.624057  452092 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 19:59:42.624379  452092 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 19:59:45.988891  452092 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.364080902s
	I1027 19:59:41.468619  456180 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 19:59:41.468877  456180 start.go:159] libmachine.API.Create for "embed-certs-629838" (driver="docker")
	I1027 19:59:41.468931  456180 client.go:168] LocalClient.Create starting
	I1027 19:59:41.469005  456180 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem
	I1027 19:59:41.469038  456180 main.go:141] libmachine: Decoding PEM data...
	I1027 19:59:41.469053  456180 main.go:141] libmachine: Parsing certificate...
	I1027 19:59:41.469105  456180 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem
	I1027 19:59:41.469122  456180 main.go:141] libmachine: Decoding PEM data...
	I1027 19:59:41.469132  456180 main.go:141] libmachine: Parsing certificate...
	I1027 19:59:41.469489  456180 cli_runner.go:164] Run: docker network inspect embed-certs-629838 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 19:59:41.488610  456180 cli_runner.go:211] docker network inspect embed-certs-629838 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 19:59:41.488706  456180 network_create.go:284] running [docker network inspect embed-certs-629838] to gather additional debugging logs...
	I1027 19:59:41.488733  456180 cli_runner.go:164] Run: docker network inspect embed-certs-629838
	W1027 19:59:41.508493  456180 cli_runner.go:211] docker network inspect embed-certs-629838 returned with exit code 1
	I1027 19:59:41.508524  456180 network_create.go:287] error running [docker network inspect embed-certs-629838]: docker network inspect embed-certs-629838: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-629838 not found
	I1027 19:59:41.508539  456180 network_create.go:289] output of [docker network inspect embed-certs-629838]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-629838 not found
	
	** /stderr **
	I1027 19:59:41.508654  456180 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 19:59:41.528766  456180 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-74ee89127400 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:aa:c7:29:bd:7a:a9} reservation:<nil>}
	I1027 19:59:41.529129  456180 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9c57ca829ac8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:0a:24:08:53:d6:07} reservation:<nil>}
	I1027 19:59:41.529347  456180 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b7a7c45fd176 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:52:e6:cd:c8:a6} reservation:<nil>}
	I1027 19:59:41.529772  456180 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019b5e90}
	I1027 19:59:41.529790  456180 network_create.go:124] attempt to create docker network embed-certs-629838 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1027 19:59:41.529850  456180 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-629838 embed-certs-629838
	I1027 19:59:41.588701  456180 network_create.go:108] docker network embed-certs-629838 192.168.76.0/24 created
	I1027 19:59:41.588731  456180 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-629838" container
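
Editor's note: the three "skipping subnet" lines and the final pick show the free-subnet search stepping 192.168.49.0/24 → 58 → 67 → 76, a stride of 9 in the third octet. A minimal sketch of that walk (the stride and bounds are read off the log above, not from minikube's source):

package main

import "fmt"

// firstFreeSubnet walks candidate /24s in the order seen in the log above
// (49, 58, 67, 76, ...) and returns the first one not already in use.
func firstFreeSubnet(taken map[string]bool) string {
	for third := 49; third <= 247; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, // existing bridges reported as taken above
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.76.0/24
}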
	I1027 19:59:41.588820  456180 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 19:59:41.608659  456180 cli_runner.go:164] Run: docker volume create embed-certs-629838 --label name.minikube.sigs.k8s.io=embed-certs-629838 --label created_by.minikube.sigs.k8s.io=true
	I1027 19:59:41.631099  456180 oci.go:103] Successfully created a docker volume embed-certs-629838
	I1027 19:59:41.631184  456180 cli_runner.go:164] Run: docker run --rm --name embed-certs-629838-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-629838 --entrypoint /usr/bin/test -v embed-certs-629838:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 19:59:42.277965  456180 oci.go:107] Successfully prepared a docker volume embed-certs-629838
	I1027 19:59:42.278022  456180 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:59:42.278046  456180 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 19:59:42.278143  456180 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-629838:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1027 19:59:49.659858  452092 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 7.034672404s
	I1027 19:59:50.626884  452092 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 8.002414372s
	I1027 19:59:50.648457  452092 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 19:59:50.662422  452092 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 19:59:50.677155  452092 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 19:59:50.677364  452092 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-300878 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 19:59:50.689625  452092 kubeadm.go:318] [bootstrap-token] Using token: oaht8k.u8agmshcqv6pqlu6
	I1027 19:59:50.692586  452092 out.go:252]   - Configuring RBAC rules ...
	I1027 19:59:50.692749  452092 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 19:59:50.697808  452092 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 19:59:50.707435  452092 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 19:59:50.712832  452092 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 19:59:50.718925  452092 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 19:59:50.723022  452092 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 19:59:51.037954  452092 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 19:59:47.044537  456180 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-629838:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.766351347s)
	I1027 19:59:47.044567  456180 kic.go:203] duration metric: took 4.766518013s to extract preloaded images to volume ...
	W1027 19:59:47.044729  456180 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1027 19:59:47.044851  456180 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 19:59:47.159567  456180 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-629838 --name embed-certs-629838 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-629838 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-629838 --network embed-certs-629838 --ip 192.168.76.2 --volume embed-certs-629838:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 19:59:47.586879  456180 cli_runner.go:164] Run: docker container inspect embed-certs-629838 --format={{.State.Running}}
	I1027 19:59:47.612605  456180 cli_runner.go:164] Run: docker container inspect embed-certs-629838 --format={{.State.Status}}
	I1027 19:59:47.649246  456180 cli_runner.go:164] Run: docker exec embed-certs-629838 stat /var/lib/dpkg/alternatives/iptables
	I1027 19:59:47.718217  456180 oci.go:144] the created container "embed-certs-629838" has a running status.
	I1027 19:59:47.718259  456180 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21801-266035/.minikube/machines/embed-certs-629838/id_rsa...
	I1027 19:59:48.844994  456180 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21801-266035/.minikube/machines/embed-certs-629838/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 19:59:48.868812  456180 cli_runner.go:164] Run: docker container inspect embed-certs-629838 --format={{.State.Status}}
	I1027 19:59:48.898625  456180 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 19:59:48.898644  456180 kic_runner.go:114] Args: [docker exec --privileged embed-certs-629838 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 19:59:48.965885  456180 cli_runner.go:164] Run: docker container inspect embed-certs-629838 --format={{.State.Status}}
	I1027 19:59:48.985074  456180 machine.go:93] provisionDockerMachine start ...
	I1027 19:59:48.985159  456180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 19:59:49.004691  456180 main.go:141] libmachine: Using SSH client type: native
	I1027 19:59:49.005029  456180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1027 19:59:49.005038  456180 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 19:59:49.007484  456180 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
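
Editor's note: the handshake EOF here is the usual first-attempt failure — the container started milliseconds earlier and sshd is not yet accepting sessions — and the provisioner retries until it is (the same command succeeds at 19:59:52 below). A generic retry sketch, not the libmachine implementation; the port comes from the log, everything else is illustrative:

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry keeps attempting a TCP connection until the freshly started
// container's sshd answers, or the attempt budget runs out.
func dialWithRetry(addr string, attempts int) (net.Conn, error) {
	var err error
	for i := 0; i < attempts; i++ {
		var c net.Conn
		if c, err = net.DialTimeout("tcp", addr, 2*time.Second); err == nil {
			return c, nil
		}
		time.Sleep(time.Second)
	}
	return nil, fmt.Errorf("ssh not ready after %d attempts: %w", attempts, err)
}

func main() {
	if c, err := dialWithRetry("127.0.0.1:33423", 30); err == nil {
		defer c.Close()
		fmt.Println("ssh port is accepting connections")
	}
}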
	I1027 19:59:51.487852  452092 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1027 19:59:52.038635  452092 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1027 19:59:52.040008  452092 kubeadm.go:318] 
	I1027 19:59:52.040095  452092 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1027 19:59:52.040102  452092 kubeadm.go:318] 
	I1027 19:59:52.040184  452092 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1027 19:59:52.040189  452092 kubeadm.go:318] 
	I1027 19:59:52.040215  452092 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1027 19:59:52.040276  452092 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 19:59:52.040329  452092 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 19:59:52.040334  452092 kubeadm.go:318] 
	I1027 19:59:52.040391  452092 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1027 19:59:52.040396  452092 kubeadm.go:318] 
	I1027 19:59:52.040446  452092 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 19:59:52.040450  452092 kubeadm.go:318] 
	I1027 19:59:52.040505  452092 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1027 19:59:52.040583  452092 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 19:59:52.040654  452092 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 19:59:52.040660  452092 kubeadm.go:318] 
	I1027 19:59:52.040748  452092 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 19:59:52.040830  452092 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1027 19:59:52.040835  452092 kubeadm.go:318] 
	I1027 19:59:52.041169  452092 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token oaht8k.u8agmshcqv6pqlu6 \
	I1027 19:59:52.041284  452092 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9473f077605f6866eb08c8a712af7c56a0deb8dc2a907ec8cbb4b8ae396e8f1f \
	I1027 19:59:52.041307  452092 kubeadm.go:318] 	--control-plane 
	I1027 19:59:52.041312  452092 kubeadm.go:318] 
	I1027 19:59:52.041401  452092 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1027 19:59:52.041405  452092 kubeadm.go:318] 
	I1027 19:59:52.041491  452092 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token oaht8k.u8agmshcqv6pqlu6 \
	I1027 19:59:52.041597  452092 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9473f077605f6866eb08c8a712af7c56a0deb8dc2a907ec8cbb4b8ae396e8f1f 
	I1027 19:59:52.046868  452092 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1027 19:59:52.047324  452092 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1027 19:59:52.047493  452092 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 19:59:52.047530  452092 cni.go:84] Creating CNI manager for ""
	I1027 19:59:52.047564  452092 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:59:52.050494  452092 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 19:59:52.179016  456180 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-629838
	
	I1027 19:59:52.179053  456180 ubuntu.go:182] provisioning hostname "embed-certs-629838"
	I1027 19:59:52.179128  456180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 19:59:52.202812  456180 main.go:141] libmachine: Using SSH client type: native
	I1027 19:59:52.203159  456180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1027 19:59:52.203177  456180 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-629838 && echo "embed-certs-629838" | sudo tee /etc/hostname
	I1027 19:59:52.400520  456180 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-629838
	
	I1027 19:59:52.400615  456180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 19:59:52.419970  456180 main.go:141] libmachine: Using SSH client type: native
	I1027 19:59:52.420275  456180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1027 19:59:52.420300  456180 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-629838' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-629838/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-629838' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 19:59:52.587847  456180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 19:59:52.587871  456180 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-266035/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-266035/.minikube}
	I1027 19:59:52.587895  456180 ubuntu.go:190] setting up certificates
	I1027 19:59:52.587907  456180 provision.go:84] configureAuth start
	I1027 19:59:52.587966  456180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-629838
	I1027 19:59:52.624102  456180 provision.go:143] copyHostCerts
	I1027 19:59:52.624184  456180 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem, removing ...
	I1027 19:59:52.624200  456180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem
	I1027 19:59:52.624276  456180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem (1078 bytes)
	I1027 19:59:52.624367  456180 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem, removing ...
	I1027 19:59:52.624378  456180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem
	I1027 19:59:52.624404  456180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem (1123 bytes)
	I1027 19:59:52.624460  456180 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem, removing ...
	I1027 19:59:52.624470  456180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem
	I1027 19:59:52.624495  456180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem (1675 bytes)
	I1027 19:59:52.624544  456180 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem org=jenkins.embed-certs-629838 san=[127.0.0.1 192.168.76.2 embed-certs-629838 localhost minikube]
	I1027 19:59:53.295842  456180 provision.go:177] copyRemoteCerts
	I1027 19:59:53.295914  456180 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 19:59:53.295963  456180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 19:59:53.316092  456180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/embed-certs-629838/id_rsa Username:docker}
	I1027 19:59:53.423154  456180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 19:59:53.440631  456180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1027 19:59:53.459247  456180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 19:59:53.481192  456180 provision.go:87] duration metric: took 893.260099ms to configureAuth
	I1027 19:59:53.481218  456180 ubuntu.go:206] setting minikube options for container-runtime
	I1027 19:59:53.481417  456180 config.go:182] Loaded profile config "embed-certs-629838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:59:53.481532  456180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 19:59:53.499036  456180 main.go:141] libmachine: Using SSH client type: native
	I1027 19:59:53.499350  456180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1027 19:59:53.499371  456180 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 19:59:53.814149  456180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 19:59:53.814214  456180 machine.go:96] duration metric: took 4.829119794s to provisionDockerMachine
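
Editor's note: the tee above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube so that the restarted crio picks up the insecure-registry flag. For context, this is how a systemd unit typically consumes such a file — an illustrative drop-in only; the actual unit wiring inside the kicbase image may differ:

# /etc/systemd/system/crio.service.d/10-minikube.conf  (illustrative)
[Service]
EnvironmentFile=-/etc/sysconfig/crio.minikube
ExecStart=
ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS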
	I1027 19:59:53.814239  456180 client.go:171] duration metric: took 12.345301051s to LocalClient.Create
	I1027 19:59:53.814288  456180 start.go:167] duration metric: took 12.345400723s to libmachine.API.Create "embed-certs-629838"
	I1027 19:59:53.814314  456180 start.go:293] postStartSetup for "embed-certs-629838" (driver="docker")
	I1027 19:59:53.814340  456180 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 19:59:53.814435  456180 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 19:59:53.814512  456180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 19:59:53.842586  456180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/embed-certs-629838/id_rsa Username:docker}
	I1027 19:59:53.955759  456180 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 19:59:53.959208  456180 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 19:59:53.959238  456180 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 19:59:53.959249  456180 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/addons for local assets ...
	I1027 19:59:53.959304  456180 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/files for local assets ...
	I1027 19:59:53.959389  456180 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem -> 2678802.pem in /etc/ssl/certs
	I1027 19:59:53.959494  456180 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 19:59:53.969747  456180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 19:59:53.988097  456180 start.go:296] duration metric: took 173.753596ms for postStartSetup
	I1027 19:59:53.988569  456180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-629838
	I1027 19:59:54.012247  456180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/config.json ...
	I1027 19:59:54.012547  456180 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:59:54.012600  456180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 19:59:54.035235  456180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/embed-certs-629838/id_rsa Username:docker}
	I1027 19:59:54.140646  456180 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 19:59:54.145547  456180 start.go:128] duration metric: took 12.680352717s to createHost
	I1027 19:59:54.145574  456180 start.go:83] releasing machines lock for "embed-certs-629838", held for 12.680487794s
	I1027 19:59:54.145643  456180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-629838
	I1027 19:59:54.164193  456180 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 19:59:54.164281  456180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 19:59:54.164760  456180 ssh_runner.go:195] Run: cat /version.json
	I1027 19:59:54.164819  456180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 19:59:54.192069  456180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/embed-certs-629838/id_rsa Username:docker}
	I1027 19:59:54.204766  456180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/embed-certs-629838/id_rsa Username:docker}
	I1027 19:59:54.404506  456180 ssh_runner.go:195] Run: systemctl --version
	I1027 19:59:54.411119  456180 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 19:59:54.448297  456180 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 19:59:54.452884  456180 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 19:59:54.452959  456180 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 19:59:54.485280  456180 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1027 19:59:54.485301  456180 start.go:495] detecting cgroup driver to use...
	I1027 19:59:54.485337  456180 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 19:59:54.485401  456180 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 19:59:54.505453  456180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 19:59:54.519465  456180 docker.go:218] disabling cri-docker service (if available) ...
	I1027 19:59:54.519527  456180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 19:59:54.537732  456180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 19:59:54.557318  456180 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 19:59:54.680627  456180 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 19:59:54.819455  456180 docker.go:234] disabling docker service ...
	I1027 19:59:54.819606  456180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 19:59:54.845704  456180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 19:59:54.860793  456180 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 19:59:54.995750  456180 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 19:59:55.123864  456180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 19:59:55.137401  456180 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 19:59:55.152073  456180 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 19:59:55.152191  456180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:59:55.161590  456180 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 19:59:55.161687  456180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:59:55.172312  456180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:59:55.180965  456180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:59:55.190667  456180 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 19:59:55.198861  456180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:59:55.208059  456180 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:59:55.225975  456180 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
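
Editor's note: the sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroupfs as cgroup manager, conmon cgroup, and an unprivileged-port sysctl prepended to default_sysctls. The end state implied by those edits, as an illustrative fragment — the section headers are assumed from CRI-O's stock TOML layout, not shown in the log:

# 02-crio.conf, end state implied by the edits above (illustrative)
[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]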
	I1027 19:59:55.235261  456180 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 19:59:55.242578  456180 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 19:59:55.249886  456180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:59:55.391235  456180 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 19:59:55.537283  456180 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 19:59:55.537349  456180 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 19:59:55.541218  456180 start.go:563] Will wait 60s for crictl version
	I1027 19:59:55.541328  456180 ssh_runner.go:195] Run: which crictl
	I1027 19:59:55.544660  456180 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 19:59:55.577877  456180 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 19:59:55.578040  456180 ssh_runner.go:195] Run: crio --version
	I1027 19:59:55.607981  456180 ssh_runner.go:195] Run: crio --version
	I1027 19:59:55.643361  456180 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 19:59:52.053500  452092 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 19:59:52.058237  452092 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 19:59:52.058262  452092 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 19:59:52.085939  452092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 19:59:52.485155  452092 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 19:59:52.485239  452092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-300878 minikube.k8s.io/updated_at=2025_10_27T19_59_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f minikube.k8s.io/name=no-preload-300878 minikube.k8s.io/primary=true
	I1027 19:59:52.485267  452092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:59:52.524988  452092 ops.go:34] apiserver oom_adj: -16
	I1027 19:59:52.792342  452092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:59:53.293130  452092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:59:53.792887  452092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:59:54.292957  452092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:59:54.792873  452092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:59:55.293262  452092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:59:55.793220  452092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:59:55.646248  456180 cli_runner.go:164] Run: docker network inspect embed-certs-629838 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 19:59:55.661768  456180 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1027 19:59:55.666592  456180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
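The grep/echo/cp pipeline above is minikube's idempotent /etc/hosts rewrite: strip any stale entry for the name, append the current mapping, then copy the temp file back into place. After this step, and the matching control-plane.minikube.internal step further down, the tail of the node's /etc/hosts should contain roughly:

	192.168.76.1	host.minikube.internal
	192.168.76.2	control-plane.minikube.internal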
	I1027 19:59:55.677465  456180 kubeadm.go:883] updating cluster {Name:embed-certs-629838 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-629838 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 19:59:55.677589  456180 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:59:55.677650  456180 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:59:55.708282  456180 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 19:59:55.708305  456180 crio.go:433] Images already preloaded, skipping extraction
	I1027 19:59:55.708370  456180 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:59:55.743243  456180 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 19:59:55.743268  456180 cache_images.go:85] Images are preloaded, skipping loading
	I1027 19:59:55.743277  456180 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1027 19:59:55.743399  456180 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-629838 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-629838 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 19:59:55.743485  456180 ssh_runner.go:195] Run: crio config
	I1027 19:59:55.824357  456180 cni.go:84] Creating CNI manager for ""
	I1027 19:59:55.824429  456180 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:59:55.824484  456180 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 19:59:55.824553  456180 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-629838 NodeName:embed-certs-629838 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 19:59:55.824785  456180 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-629838"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 19:59:55.824936  456180 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 19:59:55.834200  456180 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 19:59:55.834324  456180 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 19:59:55.842645  456180 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1027 19:59:55.856406  456180 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 19:59:55.874207  456180 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1027 19:59:55.890724  456180 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1027 19:59:55.894771  456180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 19:59:55.904868  456180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:59:56.029678  456180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:59:56.047513  456180 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838 for IP: 192.168.76.2
	I1027 19:59:56.047591  456180 certs.go:195] generating shared ca certs ...
	I1027 19:59:56.047625  456180 certs.go:227] acquiring lock for ca certs: {Name:mk172548fc54811b51040cc0201d01eb4b3dd19b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:59:56.047817  456180 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key
	I1027 19:59:56.047902  456180 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key
	I1027 19:59:56.047937  456180 certs.go:257] generating profile certs ...
	I1027 19:59:56.048018  456180 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/client.key
	I1027 19:59:56.048057  456180 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/client.crt with IP's: []
	I1027 19:59:56.293080  452092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:59:56.792942  452092 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:59:57.097686  452092 kubeadm.go:1113] duration metric: took 4.61244919s to wait for elevateKubeSystemPrivileges
	I1027 19:59:57.097727  452092 kubeadm.go:402] duration metric: took 25.591825208s to StartCluster
	I1027 19:59:57.097748  452092 settings.go:142] acquiring lock: {Name:mk49b9e2e9b7901c6b51e89b8b1e8ee7f9e88107 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:59:57.097815  452092 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 19:59:57.098524  452092 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/kubeconfig: {Name:mk0a03a594f8d9f9199b1c7cff7c550cec8414c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:59:57.098763  452092 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:59:57.098869  452092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 19:59:57.099140  452092 config.go:182] Loaded profile config "no-preload-300878": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:59:57.099190  452092 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 19:59:57.099255  452092 addons.go:69] Setting storage-provisioner=true in profile "no-preload-300878"
	I1027 19:59:57.099283  452092 addons.go:238] Setting addon storage-provisioner=true in "no-preload-300878"
	I1027 19:59:57.099308  452092 host.go:66] Checking if "no-preload-300878" exists ...
	I1027 19:59:57.100021  452092 cli_runner.go:164] Run: docker container inspect no-preload-300878 --format={{.State.Status}}
	I1027 19:59:57.100262  452092 addons.go:69] Setting default-storageclass=true in profile "no-preload-300878"
	I1027 19:59:57.100288  452092 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-300878"
	I1027 19:59:57.100565  452092 cli_runner.go:164] Run: docker container inspect no-preload-300878 --format={{.State.Status}}
	I1027 19:59:57.102391  452092 out.go:179] * Verifying Kubernetes components...
	I1027 19:59:57.105718  452092 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:59:57.170108  452092 addons.go:238] Setting addon default-storageclass=true in "no-preload-300878"
	I1027 19:59:57.170148  452092 host.go:66] Checking if "no-preload-300878" exists ...
	I1027 19:59:57.170617  452092 cli_runner.go:164] Run: docker container inspect no-preload-300878 --format={{.State.Status}}
	I1027 19:59:57.183034  452092 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:59:57.189045  452092 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:59:57.189066  452092 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 19:59:57.189126  452092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 19:59:57.258726  452092 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 19:59:57.258753  452092 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 19:59:57.258837  452092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 19:59:57.292570  452092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/no-preload-300878/id_rsa Username:docker}
	I1027 19:59:57.343266  452092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/no-preload-300878/id_rsa Username:docker}
	I1027 19:59:57.903423  452092 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:59:57.903597  452092 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 19:59:57.930226  452092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:59:58.091648  452092 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 19:59:59.464042  452092 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.560583418s)
	I1027 19:59:59.464834  452092 node_ready.go:35] waiting up to 6m0s for node "no-preload-300878" to be "Ready" ...
	I1027 19:59:59.485464  452092 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.580715142s)
	I1027 19:59:59.485496  452092 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
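The sed pipeline that just completed patches the stock coredns ConfigMap in place. Per its two -e expressions, the resulting Corefile fragment should look roughly like this sketch (the surrounding stock directives are elided and assumed unchanged):

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.85.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}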
	I1027 19:59:59.991827  452092 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-300878" context rescaled to 1 replicas
	I1027 20:00:00.250016  452092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.319620383s)
	I1027 20:00:00.250097  452092 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.158415174s)
	I1027 20:00:00.328288  452092 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1027 19:59:56.835381  456180 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/client.crt ...
	I1027 19:59:56.835452  456180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/client.crt: {Name:mkeb543df49a3aff77b595157be62b7969d446a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:59:56.835656  456180 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/client.key ...
	I1027 19:59:56.835706  456180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/client.key: {Name:mk03469ca0d4983188e10673c0e0e6f4844ae49f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:59:56.835818  456180 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/apiserver.key.4ab968a1
	I1027 19:59:56.835868  456180 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/apiserver.crt.4ab968a1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1027 19:59:57.099937  456180 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/apiserver.crt.4ab968a1 ...
	I1027 19:59:57.099981  456180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/apiserver.crt.4ab968a1: {Name:mkff05b961394fbc8ed0f0c0848822e27dcb8e8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:59:57.100158  456180 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/apiserver.key.4ab968a1 ...
	I1027 19:59:57.100185  456180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/apiserver.key.4ab968a1: {Name:mk480366bc74211811ef814cfe0c6bacf64a5206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:59:57.101801  456180 certs.go:382] copying /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/apiserver.crt.4ab968a1 -> /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/apiserver.crt
	I1027 19:59:57.101928  456180 certs.go:386] copying /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/apiserver.key.4ab968a1 -> /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/apiserver.key
	I1027 19:59:57.102017  456180 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/proxy-client.key
	I1027 19:59:57.102051  456180 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/proxy-client.crt with IP's: []
	I1027 19:59:59.130746  456180 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/proxy-client.crt ...
	I1027 19:59:59.130779  456180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/proxy-client.crt: {Name:mkfec23bc81ee8fdb4d992499123fad23f45b57e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:59:59.130963  456180 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/proxy-client.key ...
	I1027 19:59:59.131102  456180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/proxy-client.key: {Name:mk42052188174aa4ba25d48831fd7d3ec922a4e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:59:59.131322  456180 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880.pem (1338 bytes)
	W1027 19:59:59.131365  456180 certs.go:480] ignoring /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880_empty.pem, impossibly tiny 0 bytes
	I1027 19:59:59.131377  456180 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 19:59:59.131405  456180 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem (1078 bytes)
	I1027 19:59:59.131432  456180 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem (1123 bytes)
	I1027 19:59:59.131458  456180 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem (1675 bytes)
	I1027 19:59:59.131506  456180 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 19:59:59.132124  456180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 19:59:59.166260  456180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 19:59:59.200664  456180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 19:59:59.236352  456180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 19:59:59.283998  456180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1027 19:59:59.339368  456180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 19:59:59.402606  456180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 19:59:59.441868  456180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 19:59:59.489194  456180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 19:59:59.529785  456180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880.pem --> /usr/share/ca-certificates/267880.pem (1338 bytes)
	I1027 19:59:59.560053  456180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /usr/share/ca-certificates/2678802.pem (1708 bytes)
	I1027 19:59:59.590231  456180 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 19:59:59.610658  456180 ssh_runner.go:195] Run: openssl version
	I1027 19:59:59.619494  456180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 19:59:59.630430  456180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:59:59.637814  456180 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:59:59.638002  456180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:59:59.699934  456180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 19:59:59.712058  456180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/267880.pem && ln -fs /usr/share/ca-certificates/267880.pem /etc/ssl/certs/267880.pem"
	I1027 19:59:59.722733  456180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/267880.pem
	I1027 19:59:59.727085  456180 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:04 /usr/share/ca-certificates/267880.pem
	I1027 19:59:59.727246  456180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/267880.pem
	I1027 19:59:59.784117  456180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/267880.pem /etc/ssl/certs/51391683.0"
	I1027 19:59:59.795132  456180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2678802.pem && ln -fs /usr/share/ca-certificates/2678802.pem /etc/ssl/certs/2678802.pem"
	I1027 19:59:59.806013  456180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2678802.pem
	I1027 19:59:59.816059  456180 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:04 /usr/share/ca-certificates/2678802.pem
	I1027 19:59:59.816206  456180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2678802.pem
	I1027 19:59:59.873458  456180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2678802.pem /etc/ssl/certs/3ec20f2e.0"
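Each openssl x509 -hash -noout call above prints the subject-hash filename that OpenSSL's certificate-directory lookup expects, and the following test -L || ln -fs publishes the PEM under that name. The same idea for one cert, as a sketch (the b5213941 value is the hash this run computed for minikubeCA.pem):

	# compute the subject hash, then expose <hash>.0 -> the PEM for OpenSSL lookup
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"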
	I1027 19:59:59.883887  456180 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 19:59:59.889238  456180 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 19:59:59.889408  456180 kubeadm.go:400] StartCluster: {Name:embed-certs-629838 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-629838 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:59:59.889547  456180 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:59:59.889651  456180 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:59:59.931052  456180 cri.go:89] found id: ""
	I1027 19:59:59.931281  456180 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 19:59:59.941236  456180 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 19:59:59.950352  456180 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1027 19:59:59.950510  456180 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 19:59:59.960889  456180 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 19:59:59.960989  456180 kubeadm.go:157] found existing configuration files:
	
	I1027 19:59:59.961092  456180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 19:59:59.969796  456180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 19:59:59.969928  456180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 19:59:59.981420  456180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 19:59:59.989762  456180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 19:59:59.989884  456180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 19:59:59.997511  456180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 20:00:00.012311  456180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 20:00:00.012405  456180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 20:00:00.024646  456180 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 20:00:00.069086  456180 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 20:00:00.069222  456180 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
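The four grep/rm pairs above are one check repeated per kubeconfig: if a file under /etc/kubernetes does not reference https://control-plane.minikube.internal:8443 (here grep exits with status 2 because the files do not exist at all), it is removed so kubeadm init can write a fresh one. Spelled out as a sketch:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done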
	I1027 20:00:00.133750  456180 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 20:00:00.393996  456180 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1027 20:00:00.394131  456180 kubeadm.go:318] [preflight] Running pre-flight checks
	I1027 20:00:00.547452  456180 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1027 20:00:00.547649  456180 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1027 20:00:00.547758  456180 kubeadm.go:318] OS: Linux
	I1027 20:00:00.547837  456180 kubeadm.go:318] CGROUPS_CPU: enabled
	I1027 20:00:00.547919  456180 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1027 20:00:00.548018  456180 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1027 20:00:00.548106  456180 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1027 20:00:00.548193  456180 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1027 20:00:00.548880  456180 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1027 20:00:00.549010  456180 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1027 20:00:00.549100  456180 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1027 20:00:00.549236  456180 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1027 20:00:00.713661  456180 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 20:00:00.713893  456180 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 20:00:00.714035  456180 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 20:00:00.741106  456180 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 20:00:00.331250  452092 addons.go:514] duration metric: took 3.232023111s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 20:00:00.746177  456180 out.go:252]   - Generating certificates and keys ...
	I1027 20:00:00.746371  456180 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1027 20:00:00.746503  456180 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1027 20:00:01.151440  456180 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	W1027 20:00:01.483849  452092 node_ready.go:57] node "no-preload-300878" has "Ready":"False" status (will retry)
	W1027 20:00:03.968623  452092 node_ready.go:57] node "no-preload-300878" has "Ready":"False" status (will retry)
	I1027 20:00:01.844920  456180 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1027 20:00:02.393184  456180 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1027 20:00:02.485728  456180 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1027 20:00:02.844960  456180 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1027 20:00:02.845627  456180 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-629838 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1027 20:00:03.482344  456180 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1027 20:00:03.482908  456180 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-629838 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1027 20:00:03.773678  456180 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 20:00:04.094838  456180 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 20:00:04.599332  456180 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1027 20:00:04.599622  456180 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 20:00:05.054967  456180 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 20:00:06.552455  456180 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 20:00:07.155102  456180 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 20:00:08.153886  456180 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 20:00:08.394289  456180 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 20:00:08.395133  456180 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 20:00:08.397940  456180 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1027 20:00:06.468525  452092 node_ready.go:57] node "no-preload-300878" has "Ready":"False" status (will retry)
	W1027 20:00:08.967832  452092 node_ready.go:57] node "no-preload-300878" has "Ready":"False" status (will retry)
	W1027 20:00:10.968305  452092 node_ready.go:57] node "no-preload-300878" has "Ready":"False" status (will retry)
	I1027 20:00:08.401155  456180 out.go:252]   - Booting up control plane ...
	I1027 20:00:08.401265  456180 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 20:00:08.401346  456180 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 20:00:08.401416  456180 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 20:00:08.418558  456180 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 20:00:08.418675  456180 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 20:00:08.427485  456180 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 20:00:08.427981  456180 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 20:00:08.428324  456180 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1027 20:00:08.574268  456180 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 20:00:08.574397  456180 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 20:00:09.575898  456180 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.002071535s
	I1027 20:00:09.579896  456180 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 20:00:09.580032  456180 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1027 20:00:09.580167  456180 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 20:00:09.580261  456180 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1027 20:00:12.968509  452092 node_ready.go:57] node "no-preload-300878" has "Ready":"False" status (will retry)
	I1027 20:00:14.471499  452092 node_ready.go:49] node "no-preload-300878" is "Ready"
	I1027 20:00:14.471525  452092 node_ready.go:38] duration metric: took 15.006672073s for node "no-preload-300878" to be "Ready" ...
	I1027 20:00:14.471538  452092 api_server.go:52] waiting for apiserver process to appear ...
	I1027 20:00:14.471595  452092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 20:00:14.497999  452092 api_server.go:72] duration metric: took 17.399195238s to wait for apiserver process to appear ...
	I1027 20:00:14.498021  452092 api_server.go:88] waiting for apiserver healthz status ...
	I1027 20:00:14.498041  452092 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 20:00:14.514245  452092 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
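The healthz probe above is a plain HTTPS GET against the apiserver. Reproduced by hand from inside the node it would look roughly like the sketch below; -k skips chain verification, which is acceptable here only because /healthz is readable anonymously on a default kubeadm cluster:

	curl -sk https://192.168.85.2:8443/healthz
	# expected output: ok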
	I1027 20:00:14.516829  452092 api_server.go:141] control plane version: v1.34.1
	I1027 20:00:14.516853  452092 api_server.go:131] duration metric: took 18.82429ms to wait for apiserver health ...
	I1027 20:00:14.516861  452092 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 20:00:14.522302  452092 system_pods.go:59] 8 kube-system pods found
	I1027 20:00:14.522333  452092 system_pods.go:61] "coredns-66bc5c9577-jlg4z" [9692f7a1-291c-4c66-abc3-e0c78f66bc4c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:00:14.522340  452092 system_pods.go:61] "etcd-no-preload-300878" [e8f1c131-e6a0-47af-8491-bbdfbd04eb72] Running
	I1027 20:00:14.522346  452092 system_pods.go:61] "kindnet-smnp2" [cc388f93-6d32-42d4-b690-08e5713d67c1] Running
	I1027 20:00:14.522351  452092 system_pods.go:61] "kube-apiserver-no-preload-300878" [4e5fb347-6c19-4da9-b4df-26f45bf1d362] Running
	I1027 20:00:14.522355  452092 system_pods.go:61] "kube-controller-manager-no-preload-300878" [103a9dbe-a475-46c9-8450-1708dc7bded1] Running
	I1027 20:00:14.522361  452092 system_pods.go:61] "kube-proxy-wpv4w" [c80663df-d0c2-41dd-a3ec-f4d6652536c8] Running
	I1027 20:00:14.522365  452092 system_pods.go:61] "kube-scheduler-no-preload-300878" [6c301a1e-1d96-4826-b2c9-eb84d99e6a46] Running
	I1027 20:00:14.522371  452092 system_pods.go:61] "storage-provisioner" [54924ba0-8604-4333-8e8f-45bac06fffde] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 20:00:14.522377  452092 system_pods.go:74] duration metric: took 5.509818ms to wait for pod list to return data ...
	I1027 20:00:14.522384  452092 default_sa.go:34] waiting for default service account to be created ...
	I1027 20:00:14.526576  452092 default_sa.go:45] found service account: "default"
	I1027 20:00:14.526646  452092 default_sa.go:55] duration metric: took 4.25552ms for default service account to be created ...
	I1027 20:00:14.526675  452092 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 20:00:14.531532  452092 system_pods.go:86] 8 kube-system pods found
	I1027 20:00:14.531561  452092 system_pods.go:89] "coredns-66bc5c9577-jlg4z" [9692f7a1-291c-4c66-abc3-e0c78f66bc4c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:00:14.531567  452092 system_pods.go:89] "etcd-no-preload-300878" [e8f1c131-e6a0-47af-8491-bbdfbd04eb72] Running
	I1027 20:00:14.531574  452092 system_pods.go:89] "kindnet-smnp2" [cc388f93-6d32-42d4-b690-08e5713d67c1] Running
	I1027 20:00:14.531579  452092 system_pods.go:89] "kube-apiserver-no-preload-300878" [4e5fb347-6c19-4da9-b4df-26f45bf1d362] Running
	I1027 20:00:14.531583  452092 system_pods.go:89] "kube-controller-manager-no-preload-300878" [103a9dbe-a475-46c9-8450-1708dc7bded1] Running
	I1027 20:00:14.531588  452092 system_pods.go:89] "kube-proxy-wpv4w" [c80663df-d0c2-41dd-a3ec-f4d6652536c8] Running
	I1027 20:00:14.531592  452092 system_pods.go:89] "kube-scheduler-no-preload-300878" [6c301a1e-1d96-4826-b2c9-eb84d99e6a46] Running
	I1027 20:00:14.531597  452092 system_pods.go:89] "storage-provisioner" [54924ba0-8604-4333-8e8f-45bac06fffde] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 20:00:14.531625  452092 retry.go:31] will retry after 302.701711ms: missing components: kube-dns
	I1027 20:00:14.846720  452092 system_pods.go:86] 8 kube-system pods found
	I1027 20:00:14.846770  452092 system_pods.go:89] "coredns-66bc5c9577-jlg4z" [9692f7a1-291c-4c66-abc3-e0c78f66bc4c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:00:14.846778  452092 system_pods.go:89] "etcd-no-preload-300878" [e8f1c131-e6a0-47af-8491-bbdfbd04eb72] Running
	I1027 20:00:14.846785  452092 system_pods.go:89] "kindnet-smnp2" [cc388f93-6d32-42d4-b690-08e5713d67c1] Running
	I1027 20:00:14.846789  452092 system_pods.go:89] "kube-apiserver-no-preload-300878" [4e5fb347-6c19-4da9-b4df-26f45bf1d362] Running
	I1027 20:00:14.846794  452092 system_pods.go:89] "kube-controller-manager-no-preload-300878" [103a9dbe-a475-46c9-8450-1708dc7bded1] Running
	I1027 20:00:14.846797  452092 system_pods.go:89] "kube-proxy-wpv4w" [c80663df-d0c2-41dd-a3ec-f4d6652536c8] Running
	I1027 20:00:14.846801  452092 system_pods.go:89] "kube-scheduler-no-preload-300878" [6c301a1e-1d96-4826-b2c9-eb84d99e6a46] Running
	I1027 20:00:14.846807  452092 system_pods.go:89] "storage-provisioner" [54924ba0-8604-4333-8e8f-45bac06fffde] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 20:00:14.846830  452092 retry.go:31] will retry after 359.188792ms: missing components: kube-dns
	I1027 20:00:15.209991  452092 system_pods.go:86] 8 kube-system pods found
	I1027 20:00:15.210021  452092 system_pods.go:89] "coredns-66bc5c9577-jlg4z" [9692f7a1-291c-4c66-abc3-e0c78f66bc4c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:00:15.210027  452092 system_pods.go:89] "etcd-no-preload-300878" [e8f1c131-e6a0-47af-8491-bbdfbd04eb72] Running
	I1027 20:00:15.210033  452092 system_pods.go:89] "kindnet-smnp2" [cc388f93-6d32-42d4-b690-08e5713d67c1] Running
	I1027 20:00:15.210038  452092 system_pods.go:89] "kube-apiserver-no-preload-300878" [4e5fb347-6c19-4da9-b4df-26f45bf1d362] Running
	I1027 20:00:15.210042  452092 system_pods.go:89] "kube-controller-manager-no-preload-300878" [103a9dbe-a475-46c9-8450-1708dc7bded1] Running
	I1027 20:00:15.210046  452092 system_pods.go:89] "kube-proxy-wpv4w" [c80663df-d0c2-41dd-a3ec-f4d6652536c8] Running
	I1027 20:00:15.210050  452092 system_pods.go:89] "kube-scheduler-no-preload-300878" [6c301a1e-1d96-4826-b2c9-eb84d99e6a46] Running
	I1027 20:00:15.210055  452092 system_pods.go:89] "storage-provisioner" [54924ba0-8604-4333-8e8f-45bac06fffde] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 20:00:15.210069  452092 retry.go:31] will retry after 470.547104ms: missing components: kube-dns
	I1027 20:00:15.685655  452092 system_pods.go:86] 8 kube-system pods found
	I1027 20:00:15.685688  452092 system_pods.go:89] "coredns-66bc5c9577-jlg4z" [9692f7a1-291c-4c66-abc3-e0c78f66bc4c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:00:15.685695  452092 system_pods.go:89] "etcd-no-preload-300878" [e8f1c131-e6a0-47af-8491-bbdfbd04eb72] Running
	I1027 20:00:15.685701  452092 system_pods.go:89] "kindnet-smnp2" [cc388f93-6d32-42d4-b690-08e5713d67c1] Running
	I1027 20:00:15.685705  452092 system_pods.go:89] "kube-apiserver-no-preload-300878" [4e5fb347-6c19-4da9-b4df-26f45bf1d362] Running
	I1027 20:00:15.685710  452092 system_pods.go:89] "kube-controller-manager-no-preload-300878" [103a9dbe-a475-46c9-8450-1708dc7bded1] Running
	I1027 20:00:15.685713  452092 system_pods.go:89] "kube-proxy-wpv4w" [c80663df-d0c2-41dd-a3ec-f4d6652536c8] Running
	I1027 20:00:15.685738  452092 system_pods.go:89] "kube-scheduler-no-preload-300878" [6c301a1e-1d96-4826-b2c9-eb84d99e6a46] Running
	I1027 20:00:15.685745  452092 system_pods.go:89] "storage-provisioner" [54924ba0-8604-4333-8e8f-45bac06fffde] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 20:00:15.685761  452092 retry.go:31] will retry after 598.25301ms: missing components: kube-dns
	I1027 20:00:14.239863  456180 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.658761576s
	I1027 20:00:16.083636  456180 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.503678718s
	I1027 20:00:16.325151  456180 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.745295935s
	I1027 20:00:16.372501  456180 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 20:00:16.398062  456180 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 20:00:16.433499  456180 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 20:00:16.433950  456180 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-629838 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 20:00:16.454084  456180 kubeadm.go:318] [bootstrap-token] Using token: jxn6e6.qtxld1y6x069iqg9
	I1027 20:00:16.457103  456180 out.go:252]   - Configuring RBAC rules ...
	I1027 20:00:16.457240  456180 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 20:00:16.479195  456180 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 20:00:16.507443  456180 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 20:00:16.523375  456180 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 20:00:16.536892  456180 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 20:00:16.547628  456180 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 20:00:16.735313  456180 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 20:00:17.165305  456180 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1027 20:00:17.732278  456180 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1027 20:00:17.734040  456180 kubeadm.go:318] 
	I1027 20:00:17.734124  456180 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1027 20:00:17.734130  456180 kubeadm.go:318] 
	I1027 20:00:17.734211  456180 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1027 20:00:17.734216  456180 kubeadm.go:318] 
	I1027 20:00:17.734242  456180 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1027 20:00:17.734318  456180 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 20:00:17.734371  456180 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 20:00:17.734376  456180 kubeadm.go:318] 
	I1027 20:00:17.734432  456180 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1027 20:00:17.734437  456180 kubeadm.go:318] 
	I1027 20:00:17.734486  456180 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 20:00:17.734491  456180 kubeadm.go:318] 
	I1027 20:00:17.734545  456180 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1027 20:00:17.734646  456180 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 20:00:17.734719  456180 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 20:00:17.734723  456180 kubeadm.go:318] 
	I1027 20:00:17.734810  456180 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 20:00:17.734890  456180 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1027 20:00:17.734894  456180 kubeadm.go:318] 
	I1027 20:00:17.735009  456180 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token jxn6e6.qtxld1y6x069iqg9 \
	I1027 20:00:17.735117  456180 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9473f077605f6866eb08c8a712af7c56a0deb8dc2a907ec8cbb4b8ae396e8f1f \
	I1027 20:00:17.735138  456180 kubeadm.go:318] 	--control-plane 
	I1027 20:00:17.735143  456180 kubeadm.go:318] 
	I1027 20:00:17.735230  456180 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1027 20:00:17.735235  456180 kubeadm.go:318] 
	I1027 20:00:17.735320  456180 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token jxn6e6.qtxld1y6x069iqg9 \
	I1027 20:00:17.735426  456180 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9473f077605f6866eb08c8a712af7c56a0deb8dc2a907ec8cbb4b8ae396e8f1f 
	I1027 20:00:17.738884  456180 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1027 20:00:17.739152  456180 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1027 20:00:17.739270  456180 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 20:00:17.739302  456180 cni.go:84] Creating CNI manager for ""
	I1027 20:00:17.739316  456180 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 20:00:17.742588  456180 out.go:179] * Configuring CNI (Container Networking Interface) ...
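	(Editor's note: the cni.go decision above, "docker" driver plus "crio" runtime resolving to kindnet, is essentially a lookup on the driver/runtime pair: a non-docker runtime under the docker driver brings no pod network of its own, so a CNI plugin must be installed. A hypothetical sketch of that decision, not the actual cni.go logic:
	
	package main
	
	import "fmt"
	
	// recommendCNI mirrors the choice logged above: docker driver with a CRI
	// runtime such as crio gets kindnet. The fallback is illustrative only.
	func recommendCNI(driver, runtime string) string {
		if driver == "docker" && (runtime == "crio" || runtime == "containerd") {
			return "kindnet"
		}
		return "bridge" // hypothetical default, not minikube's real behavior
	}
	
	func main() {
		fmt.Println(recommendCNI("docker", "crio")) // prints: kindnet
	}
	
	End note.)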
	I1027 20:00:16.288142  452092 system_pods.go:86] 8 kube-system pods found
	I1027 20:00:16.288179  452092 system_pods.go:89] "coredns-66bc5c9577-jlg4z" [9692f7a1-291c-4c66-abc3-e0c78f66bc4c] Running
	I1027 20:00:16.288186  452092 system_pods.go:89] "etcd-no-preload-300878" [e8f1c131-e6a0-47af-8491-bbdfbd04eb72] Running
	I1027 20:00:16.288190  452092 system_pods.go:89] "kindnet-smnp2" [cc388f93-6d32-42d4-b690-08e5713d67c1] Running
	I1027 20:00:16.288195  452092 system_pods.go:89] "kube-apiserver-no-preload-300878" [4e5fb347-6c19-4da9-b4df-26f45bf1d362] Running
	I1027 20:00:16.288200  452092 system_pods.go:89] "kube-controller-manager-no-preload-300878" [103a9dbe-a475-46c9-8450-1708dc7bded1] Running
	I1027 20:00:16.288204  452092 system_pods.go:89] "kube-proxy-wpv4w" [c80663df-d0c2-41dd-a3ec-f4d6652536c8] Running
	I1027 20:00:16.288208  452092 system_pods.go:89] "kube-scheduler-no-preload-300878" [6c301a1e-1d96-4826-b2c9-eb84d99e6a46] Running
	I1027 20:00:16.288212  452092 system_pods.go:89] "storage-provisioner" [54924ba0-8604-4333-8e8f-45bac06fffde] Running
	I1027 20:00:16.288220  452092 system_pods.go:126] duration metric: took 1.761526546s to wait for k8s-apps to be running ...
	I1027 20:00:16.288230  452092 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 20:00:16.288285  452092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 20:00:16.305494  452092 system_svc.go:56] duration metric: took 17.252068ms WaitForService to wait for kubelet
	I1027 20:00:16.305518  452092 kubeadm.go:586] duration metric: took 19.206720603s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 20:00:16.305536  452092 node_conditions.go:102] verifying NodePressure condition ...
	I1027 20:00:16.308648  452092 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 20:00:16.308731  452092 node_conditions.go:123] node cpu capacity is 2
	I1027 20:00:16.308758  452092 node_conditions.go:105] duration metric: took 3.215696ms to run NodePressure ...
	I1027 20:00:16.308798  452092 start.go:241] waiting for startup goroutines ...
	I1027 20:00:16.308824  452092 start.go:246] waiting for cluster config update ...
	I1027 20:00:16.308850  452092 start.go:255] writing updated cluster config ...
	I1027 20:00:16.309162  452092 ssh_runner.go:195] Run: rm -f paused
	I1027 20:00:16.313743  452092 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 20:00:16.317387  452092 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jlg4z" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:00:16.322306  452092 pod_ready.go:94] pod "coredns-66bc5c9577-jlg4z" is "Ready"
	I1027 20:00:16.322378  452092 pod_ready.go:86] duration metric: took 4.970174ms for pod "coredns-66bc5c9577-jlg4z" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:00:16.328543  452092 pod_ready.go:83] waiting for pod "etcd-no-preload-300878" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:00:16.334764  452092 pod_ready.go:94] pod "etcd-no-preload-300878" is "Ready"
	I1027 20:00:16.334795  452092 pod_ready.go:86] duration metric: took 6.178402ms for pod "etcd-no-preload-300878" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:00:16.337841  452092 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-300878" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:00:16.343292  452092 pod_ready.go:94] pod "kube-apiserver-no-preload-300878" is "Ready"
	I1027 20:00:16.343321  452092 pod_ready.go:86] duration metric: took 5.451842ms for pod "kube-apiserver-no-preload-300878" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:00:16.345999  452092 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-300878" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:00:16.717262  452092 pod_ready.go:94] pod "kube-controller-manager-no-preload-300878" is "Ready"
	I1027 20:00:16.717288  452092 pod_ready.go:86] duration metric: took 371.261242ms for pod "kube-controller-manager-no-preload-300878" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:00:16.917364  452092 pod_ready.go:83] waiting for pod "kube-proxy-wpv4w" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:00:17.318332  452092 pod_ready.go:94] pod "kube-proxy-wpv4w" is "Ready"
	I1027 20:00:17.318360  452092 pod_ready.go:86] duration metric: took 400.966088ms for pod "kube-proxy-wpv4w" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:00:17.517541  452092 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-300878" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:00:17.918005  452092 pod_ready.go:94] pod "kube-scheduler-no-preload-300878" is "Ready"
	I1027 20:00:17.918029  452092 pod_ready.go:86] duration metric: took 400.460182ms for pod "kube-scheduler-no-preload-300878" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:00:17.918041  452092 pod_ready.go:40] duration metric: took 1.604269583s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 20:00:18.002602  452092 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 20:00:18.012090  452092 out.go:179] * Done! kubectl is now configured to use "no-preload-300878" cluster and "default" namespace by default
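	(Editor's note: the pod_ready.go lines above poll each labeled control-plane pod until its Ready condition turns True. A minimal client-go sketch of the same check; the kubeconfig path and the single label selector are illustrative, not minikube's:
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// podReady reports whether the pod's Ready condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		ctx := context.Background()
		deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s budget above
		for time.Now().Before(deadline) {
			pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kube-dns", // one of the labels polled above
			})
			if err == nil && len(pods.Items) > 0 {
				ready := true
				for i := range pods.Items {
					if !podReady(&pods.Items[i]) {
						ready = false
					}
				}
				if ready {
					fmt.Println("all selected pods are Ready")
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pods")
	}
	
	End note.)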
	I1027 20:00:17.745577  456180 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 20:00:17.749742  456180 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 20:00:17.749766  456180 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 20:00:17.764561  456180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 20:00:18.358806  456180 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 20:00:18.358949  456180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:00:18.359065  456180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-629838 minikube.k8s.io/updated_at=2025_10_27T20_00_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f minikube.k8s.io/name=embed-certs-629838 minikube.k8s.io/primary=true
	I1027 20:00:18.557686  456180 ops.go:34] apiserver oom_adj: -16
	I1027 20:00:18.557827  456180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:00:19.058217  456180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:00:19.558349  456180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:00:20.058018  456180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:00:20.558597  456180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:00:21.058849  456180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:00:21.558778  456180 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:00:21.657617  456180 kubeadm.go:1113] duration metric: took 3.29869788s to wait for elevateKubeSystemPrivileges
	I1027 20:00:21.657644  456180 kubeadm.go:402] duration metric: took 21.768241349s to StartCluster
	I1027 20:00:21.657662  456180 settings.go:142] acquiring lock: {Name:mk49b9e2e9b7901c6b51e89b8b1e8ee7f9e88107 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:00:21.657725  456180 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 20:00:21.660791  456180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/kubeconfig: {Name:mk0a03a594f8d9f9199b1c7cff7c550cec8414c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:00:21.661085  456180 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 20:00:21.661115  456180 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 20:00:21.661382  456180 config.go:182] Loaded profile config "embed-certs-629838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:00:21.661419  456180 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 20:00:21.661484  456180 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-629838"
	I1027 20:00:21.661499  456180 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-629838"
	I1027 20:00:21.661521  456180 host.go:66] Checking if "embed-certs-629838" exists ...
	I1027 20:00:21.661975  456180 cli_runner.go:164] Run: docker container inspect embed-certs-629838 --format={{.State.Status}}
	I1027 20:00:21.662474  456180 addons.go:69] Setting default-storageclass=true in profile "embed-certs-629838"
	I1027 20:00:21.662505  456180 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-629838"
	I1027 20:00:21.662787  456180 cli_runner.go:164] Run: docker container inspect embed-certs-629838 --format={{.State.Status}}
	I1027 20:00:21.666828  456180 out.go:179] * Verifying Kubernetes components...
	I1027 20:00:21.671894  456180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:00:21.712820  456180 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 20:00:21.713740  456180 addons.go:238] Setting addon default-storageclass=true in "embed-certs-629838"
	I1027 20:00:21.714724  456180 host.go:66] Checking if "embed-certs-629838" exists ...
	I1027 20:00:21.715488  456180 cli_runner.go:164] Run: docker container inspect embed-certs-629838 --format={{.State.Status}}
	I1027 20:00:21.715823  456180 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 20:00:21.715839  456180 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 20:00:21.715900  456180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 20:00:21.748886  456180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/embed-certs-629838/id_rsa Username:docker}
	I1027 20:00:21.763199  456180 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 20:00:21.763226  456180 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 20:00:21.763301  456180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 20:00:21.792192  456180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/embed-certs-629838/id_rsa Username:docker}
	I1027 20:00:21.968161  456180 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 20:00:21.982165  456180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 20:00:22.056606  456180 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 20:00:22.127547  456180 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 20:00:22.638206  456180 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1027 20:00:22.640689  456180 node_ready.go:35] waiting up to 6m0s for node "embed-certs-629838" to be "Ready" ...
	I1027 20:00:22.928466  456180 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1027 20:00:22.931396  456180 addons.go:514] duration metric: took 1.269956225s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 20:00:23.146739  456180 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-629838" context rescaled to 1 replicas
	W1027 20:00:24.643668  456180 node_ready.go:57] node "embed-certs-629838" has "Ready":"False" status (will retry)
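	(Editor's note: the repeated "kubectl get sa default" runs above, from 20:00:18 to 20:00:21, are minikube waiting for the token controller to create the default ServiceAccount before granting kube-system privileges; the elevateKubeSystemPrivileges duration metric closes that loop. A sketch of the same poll, shelling out to kubectl as the log does; binary and kubeconfig paths are illustrative:
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// Exits zero once the default ServiceAccount exists.
			cmd := exec.Command("kubectl", "get", "sa", "default",
				"--kubeconfig=/var/lib/minikube/kubeconfig") // illustrative path
			if err := cmd.Run(); err == nil {
				fmt.Println("default ServiceAccount exists")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for default ServiceAccount")
	}
	
	End note.)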
	
	
	==> CRI-O <==
	Oct 27 20:00:14 no-preload-300878 crio[838]: time="2025-10-27T20:00:14.86269856Z" level=info msg="Created container f14f653e74f01240f4bb48d8b5ce42f9954fd5cdee4f3fd328e41aec02b1f7bf: kube-system/coredns-66bc5c9577-jlg4z/coredns" id=862197b3-dda2-498a-a9d9-f35afdb35fa8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 20:00:14 no-preload-300878 crio[838]: time="2025-10-27T20:00:14.86378456Z" level=info msg="Starting container: f14f653e74f01240f4bb48d8b5ce42f9954fd5cdee4f3fd328e41aec02b1f7bf" id=f1be8166-6a58-4bf5-b0c4-4945d4814023 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 20:00:14 no-preload-300878 crio[838]: time="2025-10-27T20:00:14.871868952Z" level=info msg="Started container" PID=2494 containerID=f14f653e74f01240f4bb48d8b5ce42f9954fd5cdee4f3fd328e41aec02b1f7bf description=kube-system/coredns-66bc5c9577-jlg4z/coredns id=f1be8166-6a58-4bf5-b0c4-4945d4814023 name=/runtime.v1.RuntimeService/StartContainer sandboxID=44b4665c60f1b80751f35a31ac5caa9103df7d8f51707ab4f13c68e678d90647
	Oct 27 20:00:18 no-preload-300878 crio[838]: time="2025-10-27T20:00:18.62067691Z" level=info msg="Running pod sandbox: default/busybox/POD" id=dee90522-17dc-4783-8c2a-5ec3e147530e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 20:00:18 no-preload-300878 crio[838]: time="2025-10-27T20:00:18.62075041Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:00:18 no-preload-300878 crio[838]: time="2025-10-27T20:00:18.632519687Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8febf8ce614d8a8db0ab1386f1617bfe7a053ab04c972064434249f253974fcb UID:6e0e7212-11a8-40cb-8e65-ee62a4a0c520 NetNS:/var/run/netns/67fc2066-91bb-43eb-9e77-689d0fb188b9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400150a598}] Aliases:map[]}"
	Oct 27 20:00:18 no-preload-300878 crio[838]: time="2025-10-27T20:00:18.632558644Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 27 20:00:18 no-preload-300878 crio[838]: time="2025-10-27T20:00:18.647795507Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8febf8ce614d8a8db0ab1386f1617bfe7a053ab04c972064434249f253974fcb UID:6e0e7212-11a8-40cb-8e65-ee62a4a0c520 NetNS:/var/run/netns/67fc2066-91bb-43eb-9e77-689d0fb188b9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400150a598}] Aliases:map[]}"
	Oct 27 20:00:18 no-preload-300878 crio[838]: time="2025-10-27T20:00:18.647953049Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 27 20:00:18 no-preload-300878 crio[838]: time="2025-10-27T20:00:18.651701769Z" level=info msg="Ran pod sandbox 8febf8ce614d8a8db0ab1386f1617bfe7a053ab04c972064434249f253974fcb with infra container: default/busybox/POD" id=dee90522-17dc-4783-8c2a-5ec3e147530e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 20:00:18 no-preload-300878 crio[838]: time="2025-10-27T20:00:18.65278182Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b5fed17b-3da4-44d6-8f54-94748ad0905d name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:00:18 no-preload-300878 crio[838]: time="2025-10-27T20:00:18.653124383Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=b5fed17b-3da4-44d6-8f54-94748ad0905d name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:00:18 no-preload-300878 crio[838]: time="2025-10-27T20:00:18.653195872Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=b5fed17b-3da4-44d6-8f54-94748ad0905d name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:00:18 no-preload-300878 crio[838]: time="2025-10-27T20:00:18.65606126Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9fa4b5ee-37f0-4412-8c9f-72d5d159a4e1 name=/runtime.v1.ImageService/PullImage
	Oct 27 20:00:18 no-preload-300878 crio[838]: time="2025-10-27T20:00:18.657439667Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 27 20:00:20 no-preload-300878 crio[838]: time="2025-10-27T20:00:20.719418272Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=9fa4b5ee-37f0-4412-8c9f-72d5d159a4e1 name=/runtime.v1.ImageService/PullImage
	Oct 27 20:00:20 no-preload-300878 crio[838]: time="2025-10-27T20:00:20.7204855Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7828f42e-dbb1-419b-a284-056e23d6ec3c name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:00:20 no-preload-300878 crio[838]: time="2025-10-27T20:00:20.72360181Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ab340a54-a752-4a73-b5f8-c16c2083b9f2 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:00:20 no-preload-300878 crio[838]: time="2025-10-27T20:00:20.729388027Z" level=info msg="Creating container: default/busybox/busybox" id=a163ea17-3715-430e-a5df-390b25d54c82 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 20:00:20 no-preload-300878 crio[838]: time="2025-10-27T20:00:20.729521898Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:00:20 no-preload-300878 crio[838]: time="2025-10-27T20:00:20.735324368Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:00:20 no-preload-300878 crio[838]: time="2025-10-27T20:00:20.735821405Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:00:20 no-preload-300878 crio[838]: time="2025-10-27T20:00:20.752302106Z" level=info msg="Created container 3f14cc0b7e00e7ba21e0a3bb02e4da79bfb9dfe5003b45afc42cbac45cd4ebd6: default/busybox/busybox" id=a163ea17-3715-430e-a5df-390b25d54c82 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 20:00:20 no-preload-300878 crio[838]: time="2025-10-27T20:00:20.754206028Z" level=info msg="Starting container: 3f14cc0b7e00e7ba21e0a3bb02e4da79bfb9dfe5003b45afc42cbac45cd4ebd6" id=9e6157d2-23cc-4fb0-9ea6-2ec4f7aa4f74 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 20:00:20 no-preload-300878 crio[838]: time="2025-10-27T20:00:20.757568616Z" level=info msg="Started container" PID=2552 containerID=3f14cc0b7e00e7ba21e0a3bb02e4da79bfb9dfe5003b45afc42cbac45cd4ebd6 description=default/busybox/busybox id=9e6157d2-23cc-4fb0-9ea6-2ec4f7aa4f74 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8febf8ce614d8a8db0ab1386f1617bfe7a053ab04c972064434249f253974fcb
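	(Editor's note: the CRI-O lines above show the standard check-then-pull flow: the kubelet calls ImageStatus first and only issues PullImage when the image is absent. The same flow can be reproduced from the node with crictl; the image name is taken from the log, and this illustrates the flow, not the kubelet's actual code:
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		img := "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
		// ImageStatus equivalent: inspecti exits non-zero when the image is absent.
		if err := exec.Command("sudo", "crictl", "inspecti", img).Run(); err != nil {
			fmt.Println("image not found, pulling:", img)
			// PullImage equivalent.
			if out, err := exec.Command("sudo", "crictl", "pull", img).CombinedOutput(); err != nil {
				fmt.Printf("pull failed: %v\n%s", err, out)
				return
			}
		}
		fmt.Println("image present:", img)
	}
	
	End note.)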
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	3f14cc0b7e00e       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   9 seconds ago       Running             busybox                   0                   8febf8ce614d8       busybox                                     default
	f14f653e74f01       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      15 seconds ago      Running             coredns                   0                   44b4665c60f1b       coredns-66bc5c9577-jlg4z                    kube-system
	eaebf7a82f27e       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      15 seconds ago      Running             storage-provisioner       0                   ab817e6fd10ee       storage-provisioner                         kube-system
	d56484aa324c1       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    26 seconds ago      Running             kindnet-cni               0                   2098d4431c357       kindnet-smnp2                               kube-system
	837b3877d3e90       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      31 seconds ago      Running             kube-proxy                0                   77245af64a290       kube-proxy-wpv4w                            kube-system
	ca9f5b436876c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      46 seconds ago      Running             kube-scheduler            0                   bd7c2dd98634f       kube-scheduler-no-preload-300878            kube-system
	9566f20304e19       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      46 seconds ago      Running             kube-controller-manager   0                   7cbbfece2c43d       kube-controller-manager-no-preload-300878   kube-system
	9828003f2b1fb       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      46 seconds ago      Running             kube-apiserver            0                   96bb14b397a11       kube-apiserver-no-preload-300878            kube-system
	b680e922e1995       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      46 seconds ago      Running             etcd                      0                   28755b6bb16d1       etcd-no-preload-300878                      kube-system
	
	
	==> coredns [f14f653e74f01240f4bb48d8b5ce42f9954fd5cdee4f3fd328e41aec02b1f7bf] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52489 - 52432 "HINFO IN 2670036948154825550.8744421413249635528. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033984783s
	
	
	==> describe nodes <==
	Name:               no-preload-300878
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-300878
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=no-preload-300878
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T19_59_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 19:59:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-300878
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 20:00:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 20:00:22 +0000   Mon, 27 Oct 2025 19:59:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 20:00:22 +0000   Mon, 27 Oct 2025 19:59:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 20:00:22 +0000   Mon, 27 Oct 2025 19:59:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 20:00:22 +0000   Mon, 27 Oct 2025 20:00:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-300878
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                efc50928-8e8e-470b-97b1-2b65f64ae45b
	  Boot ID:                    23ea05b4-8203-4fb9-a84a-5deb4b091cbb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-jlg4z                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     33s
	  kube-system                 etcd-no-preload-300878                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         41s
	  kube-system                 kindnet-smnp2                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      33s
	  kube-system                 kube-apiserver-no-preload-300878             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-no-preload-300878    200m (10%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-wpv4w                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-scheduler-no-preload-300878             100m (5%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 30s                kube-proxy       
	  Normal   NodeHasSufficientMemory  48s (x8 over 48s)  kubelet          Node no-preload-300878 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    48s (x8 over 48s)  kubelet          Node no-preload-300878 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     48s (x8 over 48s)  kubelet          Node no-preload-300878 status is now: NodeHasSufficientPID
	  Normal   Starting                 39s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 39s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  39s                kubelet          Node no-preload-300878 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    39s                kubelet          Node no-preload-300878 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     39s                kubelet          Node no-preload-300878 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           34s                node-controller  Node no-preload-300878 event: Registered Node no-preload-300878 in Controller
	  Normal   NodeReady                16s                kubelet          Node no-preload-300878 status is now: NodeReady
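	(Editor's note: the "Allocated resources" block above is just the column sums of the Non-terminated Pods table, expressed against the node's 2-CPU (2000m) allocatable: 100m + 100m + 100m + 250m + 200m + 100m = 850m, which truncates to the printed 42%. A quick check using only the values printed above:
	
	package main
	
	import "fmt"
	
	func main() {
		// CPU requests (millicores) from the Non-terminated Pods table above.
		requests := map[string]int{
			"coredns":                 100,
			"etcd":                    100,
			"kindnet":                 100,
			"kube-apiserver":          250,
			"kube-controller-manager": 200,
			"kube-scheduler":          100,
		}
		total := 0
		for _, m := range requests {
			total += m
		}
		allocatable := 2000 // 2 CPUs, in millicores
		fmt.Printf("cpu requests: %dm (%d%%)\n", total, total*100/allocatable)
		// Prints: cpu requests: 850m (42%), matching the describe output.
	}
	
	End note.)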
	
	
	==> dmesg <==
	[ +33.986700] overlayfs: idmapped layers are currently not supported
	[Oct27 19:36] overlayfs: idmapped layers are currently not supported
	[Oct27 19:37] overlayfs: idmapped layers are currently not supported
	[Oct27 19:38] overlayfs: idmapped layers are currently not supported
	[Oct27 19:39] overlayfs: idmapped layers are currently not supported
	[Oct27 19:40] overlayfs: idmapped layers are currently not supported
	[  +7.015891] overlayfs: idmapped layers are currently not supported
	[Oct27 19:41] overlayfs: idmapped layers are currently not supported
	[ +28.455216] overlayfs: idmapped layers are currently not supported
	[Oct27 19:42] overlayfs: idmapped layers are currently not supported
	[ +26.490174] overlayfs: idmapped layers are currently not supported
	[Oct27 19:44] overlayfs: idmapped layers are currently not supported
	[Oct27 19:45] overlayfs: idmapped layers are currently not supported
	[Oct27 19:47] overlayfs: idmapped layers are currently not supported
	[Oct27 19:49] overlayfs: idmapped layers are currently not supported
	[ +31.410335] overlayfs: idmapped layers are currently not supported
	[Oct27 19:51] overlayfs: idmapped layers are currently not supported
	[Oct27 19:53] overlayfs: idmapped layers are currently not supported
	[Oct27 19:54] overlayfs: idmapped layers are currently not supported
	[Oct27 19:55] overlayfs: idmapped layers are currently not supported
	[Oct27 19:56] overlayfs: idmapped layers are currently not supported
	[Oct27 19:57] overlayfs: idmapped layers are currently not supported
	[Oct27 19:58] overlayfs: idmapped layers are currently not supported
	[Oct27 19:59] overlayfs: idmapped layers are currently not supported
	[Oct27 20:00] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b680e922e1995f49ef1c58102f27fadbea191c1f79f2f7c72b65104d0e4f63d0] <==
	{"level":"warn","ts":"2025-10-27T19:59:46.340092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:59:46.359851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:59:46.370450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:59:46.390780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:59:46.411237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:59:46.427934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:59:46.440632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:59:46.455492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:59:46.474087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:59:46.511784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:59:46.524058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:59:46.540946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:59:46.565281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:59:46.581744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:59:46.592492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:59:46.614716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:59:46.632468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:59:46.646007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:59:46.659178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:59:46.679108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:59:46.703423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:59:46.738706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:59:46.774264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:59:46.783091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:59:46.877603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49298","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:00:30 up  2:43,  0 user,  load average: 3.20, 3.05, 2.61
	Linux no-preload-300878 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d56484aa324c11f1eb5ffb4ba0364a1c72e13f2020d593f359b33e1a4a0cf139] <==
	I1027 20:00:03.926936       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 20:00:04.016594       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1027 20:00:04.016764       1 main.go:148] setting mtu 1500 for CNI 
	I1027 20:00:04.016818       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 20:00:04.016859       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T20:00:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 20:00:04.222544       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 20:00:04.222646       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 20:00:04.222709       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 20:00:04.224674       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 20:00:04.422821       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 20:00:04.422905       1 metrics.go:72] Registering metrics
	I1027 20:00:04.423017       1 controller.go:711] "Syncing nftables rules"
	I1027 20:00:14.228997       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 20:00:14.229051       1 main.go:301] handling current node
	I1027 20:00:24.223062       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 20:00:24.223102       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9828003f2b1fb298c12796070e011bc41db96434eb68014b4045b31b59a27997] <==
	I1027 19:59:48.675226       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1027 19:59:48.675874       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1027 19:59:48.675993       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1027 19:59:48.677411       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:59:48.694107       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 19:59:48.708459       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:59:48.709826       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 19:59:49.044921       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1027 19:59:49.054943       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1027 19:59:49.054973       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 19:59:50.207355       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 19:59:50.271787       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 19:59:50.375881       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1027 19:59:50.382705       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1027 19:59:50.383821       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 19:59:50.388506       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 19:59:51.178700       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 19:59:51.449141       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 19:59:51.483060       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1027 19:59:51.497269       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 19:59:57.089219       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:59:57.141553       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:59:57.148772       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 19:59:57.227091       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1027 20:00:28.437514       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:51210: use of closed network connection
	
	
	==> kube-controller-manager [9566f20304e1942ff329631bf23011dd71574cc9e72b2acb5fe72b09636a11c8] <==
	I1027 19:59:56.318793       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 19:59:56.319890       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1027 19:59:56.319941       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 19:59:56.320005       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 19:59:56.320083       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-300878"
	I1027 19:59:56.320123       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1027 19:59:56.320301       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1027 19:59:56.321377       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 19:59:56.321560       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 19:59:56.321602       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 19:59:56.321628       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 19:59:56.321811       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 19:59:56.335255       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 19:59:56.335471       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 19:59:56.335562       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 19:59:56.336088       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 19:59:56.347629       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 19:59:56.359079       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 19:59:56.387058       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 19:59:56.387364       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:59:56.387409       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 19:59:56.387440       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 19:59:56.387543       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 19:59:56.387636       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 20:00:16.323274       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [837b3877d3e90dc8d054820410c034e92177e0b50e2d3acfe8653bc56c05536c] <==
	I1027 19:59:58.532289       1 server_linux.go:53] "Using iptables proxy"
	I1027 19:59:58.874280       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 19:59:58.974696       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 19:59:58.974744       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1027 19:59:58.974845       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 19:59:59.184645       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 19:59:59.184701       1 server_linux.go:132] "Using iptables Proxier"
	I1027 19:59:59.192830       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 19:59:59.193112       1 server.go:527] "Version info" version="v1.34.1"
	I1027 19:59:59.193124       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:59:59.194413       1 config.go:200] "Starting service config controller"
	I1027 19:59:59.194422       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 19:59:59.194437       1 config.go:106] "Starting endpoint slice config controller"
	I1027 19:59:59.194440       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 19:59:59.194457       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 19:59:59.194461       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 19:59:59.200227       1 config.go:309] "Starting node config controller"
	I1027 19:59:59.200242       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 19:59:59.200250       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 19:59:59.296518       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 19:59:59.296562       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 19:59:59.296596       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ca9f5b436876c2569026f778f678fb970dd9b0a31f55750028b515124d3fce66] <==
	I1027 19:59:49.621710       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:59:49.631308       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:59:49.631925       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 19:59:49.632021       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 19:59:49.632584       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1027 19:59:49.647423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1027 19:59:49.648030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 19:59:49.648143       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 19:59:49.648188       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 19:59:49.648228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 19:59:49.648260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 19:59:49.648294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 19:59:49.648325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 19:59:49.648363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 19:59:49.658523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 19:59:49.664303       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 19:59:49.664547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 19:59:49.667159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 19:59:49.667222       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 19:59:49.667302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 19:59:49.667347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 19:59:49.667394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 19:59:49.667512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 19:59:49.667578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1027 19:59:50.833817       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 19:59:52 no-preload-300878 kubelet[2015]: I1027 19:59:52.675755    2015 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-300878" podStartSLOduration=1.675734514 podStartE2EDuration="1.675734514s" podCreationTimestamp="2025-10-27 19:59:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:59:52.644967742 +0000 UTC m=+1.346050364" watchObservedRunningTime="2025-10-27 19:59:52.675734514 +0000 UTC m=+1.376817128"
	Oct 27 19:59:56 no-preload-300878 kubelet[2015]: I1027 19:59:56.346435    2015 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 27 19:59:56 no-preload-300878 kubelet[2015]: I1027 19:59:56.347310    2015 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 27 19:59:57 no-preload-300878 kubelet[2015]: I1027 19:59:57.600830    2015 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cc388f93-6d32-42d4-b690-08e5713d67c1-lib-modules\") pod \"kindnet-smnp2\" (UID: \"cc388f93-6d32-42d4-b690-08e5713d67c1\") " pod="kube-system/kindnet-smnp2"
	Oct 27 19:59:57 no-preload-300878 kubelet[2015]: I1027 19:59:57.600962    2015 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dwdp\" (UniqueName: \"kubernetes.io/projected/cc388f93-6d32-42d4-b690-08e5713d67c1-kube-api-access-7dwdp\") pod \"kindnet-smnp2\" (UID: \"cc388f93-6d32-42d4-b690-08e5713d67c1\") " pod="kube-system/kindnet-smnp2"
	Oct 27 19:59:57 no-preload-300878 kubelet[2015]: I1027 19:59:57.601095    2015 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/cc388f93-6d32-42d4-b690-08e5713d67c1-cni-cfg\") pod \"kindnet-smnp2\" (UID: \"cc388f93-6d32-42d4-b690-08e5713d67c1\") " pod="kube-system/kindnet-smnp2"
	Oct 27 19:59:57 no-preload-300878 kubelet[2015]: I1027 19:59:57.601123    2015 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cc388f93-6d32-42d4-b690-08e5713d67c1-xtables-lock\") pod \"kindnet-smnp2\" (UID: \"cc388f93-6d32-42d4-b690-08e5713d67c1\") " pod="kube-system/kindnet-smnp2"
	Oct 27 19:59:57 no-preload-300878 kubelet[2015]: I1027 19:59:57.701983    2015 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c80663df-d0c2-41dd-a3ec-f4d6652536c8-lib-modules\") pod \"kube-proxy-wpv4w\" (UID: \"c80663df-d0c2-41dd-a3ec-f4d6652536c8\") " pod="kube-system/kube-proxy-wpv4w"
	Oct 27 19:59:57 no-preload-300878 kubelet[2015]: I1027 19:59:57.702084    2015 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c80663df-d0c2-41dd-a3ec-f4d6652536c8-xtables-lock\") pod \"kube-proxy-wpv4w\" (UID: \"c80663df-d0c2-41dd-a3ec-f4d6652536c8\") " pod="kube-system/kube-proxy-wpv4w"
	Oct 27 19:59:57 no-preload-300878 kubelet[2015]: I1027 19:59:57.702149    2015 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c80663df-d0c2-41dd-a3ec-f4d6652536c8-kube-proxy\") pod \"kube-proxy-wpv4w\" (UID: \"c80663df-d0c2-41dd-a3ec-f4d6652536c8\") " pod="kube-system/kube-proxy-wpv4w"
	Oct 27 19:59:57 no-preload-300878 kubelet[2015]: I1027 19:59:57.702219    2015 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7xxw\" (UniqueName: \"kubernetes.io/projected/c80663df-d0c2-41dd-a3ec-f4d6652536c8-kube-api-access-x7xxw\") pod \"kube-proxy-wpv4w\" (UID: \"c80663df-d0c2-41dd-a3ec-f4d6652536c8\") " pod="kube-system/kube-proxy-wpv4w"
	Oct 27 19:59:57 no-preload-300878 kubelet[2015]: I1027 19:59:57.885951    2015 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 27 19:59:59 no-preload-300878 kubelet[2015]: I1027 19:59:59.161588    2015 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wpv4w" podStartSLOduration=2.161552595 podStartE2EDuration="2.161552595s" podCreationTimestamp="2025-10-27 19:59:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:59:58.644281131 +0000 UTC m=+7.345363753" watchObservedRunningTime="2025-10-27 19:59:59.161552595 +0000 UTC m=+7.862635217"
	Oct 27 20:00:14 no-preload-300878 kubelet[2015]: I1027 20:00:14.388492    2015 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 27 20:00:14 no-preload-300878 kubelet[2015]: I1027 20:00:14.424516    2015 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-smnp2" podStartSLOduration=11.761996446 podStartE2EDuration="17.424488762s" podCreationTimestamp="2025-10-27 19:59:57 +0000 UTC" firstStartedPulling="2025-10-27 19:59:58.158636936 +0000 UTC m=+6.859719558" lastFinishedPulling="2025-10-27 20:00:03.821129252 +0000 UTC m=+12.522211874" observedRunningTime="2025-10-27 20:00:04.662308631 +0000 UTC m=+13.363391261" watchObservedRunningTime="2025-10-27 20:00:14.424488762 +0000 UTC m=+23.125571384"
	Oct 27 20:00:14 no-preload-300878 kubelet[2015]: I1027 20:00:14.495946    2015 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/54924ba0-8604-4333-8e8f-45bac06fffde-tmp\") pod \"storage-provisioner\" (UID: \"54924ba0-8604-4333-8e8f-45bac06fffde\") " pod="kube-system/storage-provisioner"
	Oct 27 20:00:14 no-preload-300878 kubelet[2015]: I1027 20:00:14.496271    2015 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6z49\" (UniqueName: \"kubernetes.io/projected/54924ba0-8604-4333-8e8f-45bac06fffde-kube-api-access-b6z49\") pod \"storage-provisioner\" (UID: \"54924ba0-8604-4333-8e8f-45bac06fffde\") " pod="kube-system/storage-provisioner"
	Oct 27 20:00:14 no-preload-300878 kubelet[2015]: I1027 20:00:14.597455    2015 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzdp2\" (UniqueName: \"kubernetes.io/projected/9692f7a1-291c-4c66-abc3-e0c78f66bc4c-kube-api-access-kzdp2\") pod \"coredns-66bc5c9577-jlg4z\" (UID: \"9692f7a1-291c-4c66-abc3-e0c78f66bc4c\") " pod="kube-system/coredns-66bc5c9577-jlg4z"
	Oct 27 20:00:14 no-preload-300878 kubelet[2015]: I1027 20:00:14.597658    2015 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9692f7a1-291c-4c66-abc3-e0c78f66bc4c-config-volume\") pod \"coredns-66bc5c9577-jlg4z\" (UID: \"9692f7a1-291c-4c66-abc3-e0c78f66bc4c\") " pod="kube-system/coredns-66bc5c9577-jlg4z"
	Oct 27 20:00:14 no-preload-300878 kubelet[2015]: W1027 20:00:14.742681    2015 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5f7533431bd6ad05927fc6ee4ffbe127ba4e775919d072603a24e3cd4e1d5d89/crio-ab817e6fd10eec8632bb3d47df87a23c901bc4afd90530e9a3625934b3fac089 WatchSource:0}: Error finding container ab817e6fd10eec8632bb3d47df87a23c901bc4afd90530e9a3625934b3fac089: Status 404 returned error can't find the container with id ab817e6fd10eec8632bb3d47df87a23c901bc4afd90530e9a3625934b3fac089
	Oct 27 20:00:14 no-preload-300878 kubelet[2015]: W1027 20:00:14.793212    2015 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5f7533431bd6ad05927fc6ee4ffbe127ba4e775919d072603a24e3cd4e1d5d89/crio-44b4665c60f1b80751f35a31ac5caa9103df7d8f51707ab4f13c68e678d90647 WatchSource:0}: Error finding container 44b4665c60f1b80751f35a31ac5caa9103df7d8f51707ab4f13c68e678d90647: Status 404 returned error can't find the container with id 44b4665c60f1b80751f35a31ac5caa9103df7d8f51707ab4f13c68e678d90647
	Oct 27 20:00:15 no-preload-300878 kubelet[2015]: I1027 20:00:15.709637    2015 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.709618924 podStartE2EDuration="15.709618924s" podCreationTimestamp="2025-10-27 20:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 20:00:15.68964534 +0000 UTC m=+24.390727962" watchObservedRunningTime="2025-10-27 20:00:15.709618924 +0000 UTC m=+24.410701546"
	Oct 27 20:00:18 no-preload-300878 kubelet[2015]: I1027 20:00:18.310275    2015 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jlg4z" podStartSLOduration=21.310257238 podStartE2EDuration="21.310257238s" podCreationTimestamp="2025-10-27 19:59:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 20:00:15.714245813 +0000 UTC m=+24.415328435" watchObservedRunningTime="2025-10-27 20:00:18.310257238 +0000 UTC m=+27.011339860"
	Oct 27 20:00:18 no-preload-300878 kubelet[2015]: I1027 20:00:18.423593    2015 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bncl2\" (UniqueName: \"kubernetes.io/projected/6e0e7212-11a8-40cb-8e65-ee62a4a0c520-kube-api-access-bncl2\") pod \"busybox\" (UID: \"6e0e7212-11a8-40cb-8e65-ee62a4a0c520\") " pod="default/busybox"
	Oct 27 20:00:18 no-preload-300878 kubelet[2015]: W1027 20:00:18.649772    2015 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5f7533431bd6ad05927fc6ee4ffbe127ba4e775919d072603a24e3cd4e1d5d89/crio-8febf8ce614d8a8db0ab1386f1617bfe7a053ab04c972064434249f253974fcb WatchSource:0}: Error finding container 8febf8ce614d8a8db0ab1386f1617bfe7a053ab04c972064434249f253974fcb: Status 404 returned error can't find the container with id 8febf8ce614d8a8db0ab1386f1617bfe7a053ab04c972064434249f253974fcb
	
	
	==> storage-provisioner [eaebf7a82f27e10e6883666db4f8ce07f15960614a9f8e396c4e48ad12ef89b5] <==
	I1027 20:00:14.892429       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 20:00:14.956522       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 20:00:14.956719       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 20:00:14.959497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:00:14.966155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 20:00:14.966375       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 20:00:14.967448       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-300878_5cbdf49e-2cec-4717-95e6-5f05a7298525!
	I1027 20:00:14.968208       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3c1092eb-a7a2-455f-b121-d7c4d1adde3a", APIVersion:"v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-300878_5cbdf49e-2cec-4717-95e6-5f05a7298525 became leader
	W1027 20:00:14.975499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:00:14.983343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 20:00:15.068911       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-300878_5cbdf49e-2cec-4717-95e6-5f05a7298525!
	W1027 20:00:16.989719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:00:16.995041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:00:18.998079       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:00:19.005676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:00:21.009122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:00:21.019908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:00:23.023572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:00:23.028415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:00:25.033316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:00:25.040366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:00:27.044246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:00:27.048925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:00:29.051811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:00:29.056452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-300878 -n no-preload-300878
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-300878 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.53s)
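Two signals in the logs above are worth separating from the actual failure. The kube-scheduler's "Failed to watch ... forbidden" errors are startup noise: the scheduler's informers begin listing before the apiserver has finished establishing RBAC, and the later "Caches are synced" line shows they clear on retry. The storage provisioner's repeated "v1 Endpoints is deprecated" warnings come from its Endpoints-based leader election on kube-system/k8s.io-minikube-hostpath; client-go's current lock type for the same job is a coordination.k8s.io Lease. A minimal sketch of the Lease-based variant, assuming in-cluster config and illustrative timing values (this is not the provisioner's actual code):

package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()

	// Lease-based lock: avoids the deprecated v1 Endpoints objects seen in the warnings above.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		LeaseDuration:   15 * time.Second, // illustrative values, not the provisioner's
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		ReleaseOnCancel: true,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { /* start the provisioner controller */ },
			OnStoppedLeading: func() { os.Exit(1) },
		},
	})
}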

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.48s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-629838 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-629838 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (275.115706ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T20:01:14Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-629838 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-629838 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-629838 describe deploy/metrics-server -n kube-system: exit status 1 (99.572918ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-629838 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
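The exit status 11 above is not a metrics-server problem. Before enabling an addon, minikube checks whether the cluster is paused, and with the crio runtime that check shells out to sudo runc list -f json; on this node /run/runc does not exist, so the listing itself fails and the command aborts with MK_ADDON_ENABLE_PAUSED before any manifest is applied. A rough sketch of that kind of paused-container check, assuming runc's JSON list output shape (an illustration of the failing step, not minikube's code):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer mirrors the fields runc prints with `list -f json`.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func pausedContainers() ([]string, error) {
	// This is the step that fails in the log above: runc cannot open /run/runc.
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var cs []runcContainer
	if err := json.Unmarshal(out, &cs); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range cs {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := pausedContainers()
	if err != nil {
		fmt.Println("check paused:", err) // same shape as the error in the test output
		return
	}
	fmt.Println("paused containers:", ids)
}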
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-629838
helpers_test.go:243: (dbg) docker inspect embed-certs-629838:

-- stdout --
	[
	    {
	        "Id": "c4f57eb9d97cb78db15678f52105eae7c06964f89c91a987456d1c3d7cf90fa0",
	        "Created": "2025-10-27T19:59:47.181587162Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 456663,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T19:59:47.257386774Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/c4f57eb9d97cb78db15678f52105eae7c06964f89c91a987456d1c3d7cf90fa0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c4f57eb9d97cb78db15678f52105eae7c06964f89c91a987456d1c3d7cf90fa0/hostname",
	        "HostsPath": "/var/lib/docker/containers/c4f57eb9d97cb78db15678f52105eae7c06964f89c91a987456d1c3d7cf90fa0/hosts",
	        "LogPath": "/var/lib/docker/containers/c4f57eb9d97cb78db15678f52105eae7c06964f89c91a987456d1c3d7cf90fa0/c4f57eb9d97cb78db15678f52105eae7c06964f89c91a987456d1c3d7cf90fa0-json.log",
	        "Name": "/embed-certs-629838",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-629838:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-629838",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c4f57eb9d97cb78db15678f52105eae7c06964f89c91a987456d1c3d7cf90fa0",
	                "LowerDir": "/var/lib/docker/overlay2/11e03d0ec6d0fa1ee685eb208aa9a39493e58b461a663b0cbbf1e2b57f205fd2-init/diff:/var/lib/docker/overlay2/0567218bbe05e8b59b2aef7ad82032d6716040c1f9d9d73e7de8682101079f76/diff",
	                "MergedDir": "/var/lib/docker/overlay2/11e03d0ec6d0fa1ee685eb208aa9a39493e58b461a663b0cbbf1e2b57f205fd2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/11e03d0ec6d0fa1ee685eb208aa9a39493e58b461a663b0cbbf1e2b57f205fd2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/11e03d0ec6d0fa1ee685eb208aa9a39493e58b461a663b0cbbf1e2b57f205fd2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-629838",
	                "Source": "/var/lib/docker/volumes/embed-certs-629838/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-629838",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-629838",
	                "name.minikube.sigs.k8s.io": "embed-certs-629838",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e06de7ac7f3e71034103e276acf3cc13bad26567031313fe4d17209f12043b88",
	            "SandboxKey": "/var/run/docker/netns/e06de7ac7f3e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-629838": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:f0:6f:54:d0:19",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e984493782940b4e21ac1d18681d3b8ebbf5771aadf9508ab04a1597fbf530b4",
	                    "EndpointID": "cdeb45243b0ddb5f9f95fdaca3c43b5f2ccb8e267cf8c26ae2b66e7fe89266fb",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-629838",
	                        "c4f57eb9d97c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
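The inspect dump carries the connectivity details the later status checks rely on: the API server's 8443/tcp is published on 127.0.0.1:33426, and the node sits at 192.168.76.2 on the embed-certs-629838 network. A small sketch of extracting that mapping programmatically; the struct mirrors only the NetworkSettings.Ports fragment of the JSON above:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectEntry models just the slice of `docker inspect` output we need.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "embed-certs-629838").Output()
	if err != nil {
		panic(err)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil || len(entries) == 0 {
		panic("no inspect data")
	}
	// For the dump above this prints 127.0.0.1:33426 (the Kubernetes API server).
	for _, b := range entries[0].NetworkSettings.Ports["8443/tcp"] {
		fmt.Printf("%s:%s\n", b.HostIp, b.HostPort)
	}
}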
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-629838 -n embed-certs-629838
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-629838 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-629838 logs -n 25: (1.207936594s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p force-systemd-env-105360                                                                                                                                                                                                                   │ force-systemd-env-105360  │ jenkins │ v1.37.0 │ 27 Oct 25 19:55 UTC │ 27 Oct 25 19:55 UTC │
	│ start   │ -p cert-expiration-280013 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-280013    │ jenkins │ v1.37.0 │ 27 Oct 25 19:55 UTC │ 27 Oct 25 19:55 UTC │
	│ delete  │ -p kubernetes-upgrade-524430                                                                                                                                                                                                                  │ kubernetes-upgrade-524430 │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:56 UTC │
	│ start   │ -p cert-options-319273 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-319273       │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:56 UTC │
	│ ssh     │ cert-options-319273 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-319273       │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:56 UTC │
	│ ssh     │ -p cert-options-319273 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-319273       │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:56 UTC │
	│ delete  │ -p cert-options-319273                                                                                                                                                                                                                        │ cert-options-319273       │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:56 UTC │
	│ start   │ -p old-k8s-version-942644 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:57 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-942644 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │                     │
	│ stop    │ -p old-k8s-version-942644 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:58 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-942644 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:58 UTC │
	│ start   │ -p old-k8s-version-942644 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:59 UTC │
	│ start   │ -p cert-expiration-280013 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-280013    │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:58 UTC │
	│ delete  │ -p cert-expiration-280013                                                                                                                                                                                                                     │ cert-expiration-280013    │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:59 UTC │
	│ start   │ -p no-preload-300878 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-300878         │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 20:00 UTC │
	│ image   │ old-k8s-version-942644 image list --format=json                                                                                                                                                                                               │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 19:59 UTC │
	│ pause   │ -p old-k8s-version-942644 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │                     │
	│ delete  │ -p old-k8s-version-942644                                                                                                                                                                                                                     │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 19:59 UTC │
	│ delete  │ -p old-k8s-version-942644                                                                                                                                                                                                                     │ old-k8s-version-942644    │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 19:59 UTC │
	│ start   │ -p embed-certs-629838 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-629838        │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 20:01 UTC │
	│ addons  │ enable metrics-server -p no-preload-300878 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-300878         │ jenkins │ v1.37.0 │ 27 Oct 25 20:00 UTC │                     │
	│ stop    │ -p no-preload-300878 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-300878         │ jenkins │ v1.37.0 │ 27 Oct 25 20:00 UTC │ 27 Oct 25 20:00 UTC │
	│ addons  │ enable dashboard -p no-preload-300878 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-300878         │ jenkins │ v1.37.0 │ 27 Oct 25 20:00 UTC │ 27 Oct 25 20:00 UTC │
	│ start   │ -p no-preload-300878 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-300878         │ jenkins │ v1.37.0 │ 27 Oct 25 20:00 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-629838 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-629838        │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 20:00:43
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 20:00:43.248990  460048 out.go:360] Setting OutFile to fd 1 ...
	I1027 20:00:43.249152  460048 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:00:43.249162  460048 out.go:374] Setting ErrFile to fd 2...
	I1027 20:00:43.249167  460048 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:00:43.249414  460048 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 20:00:43.249878  460048 out.go:368] Setting JSON to false
	I1027 20:00:43.250911  460048 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9796,"bootTime":1761585448,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1027 20:00:43.251011  460048 start.go:141] virtualization:  
	I1027 20:00:43.254860  460048 out.go:179] * [no-preload-300878] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 20:00:43.257826  460048 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 20:00:43.257946  460048 notify.go:220] Checking for updates...
	I1027 20:00:43.263771  460048 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 20:00:43.266722  460048 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 20:00:43.269585  460048 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube
	I1027 20:00:43.272529  460048 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 20:00:43.275314  460048 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 20:00:43.278726  460048 config.go:182] Loaded profile config "no-preload-300878": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:00:43.279380  460048 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 20:00:43.304657  460048 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 20:00:43.304772  460048 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 20:00:43.365899  460048 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 20:00:43.355815365 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 20:00:43.366010  460048 docker.go:318] overlay module found
	I1027 20:00:43.369166  460048 out.go:179] * Using the docker driver based on existing profile
	I1027 20:00:43.371981  460048 start.go:305] selected driver: docker
	I1027 20:00:43.372005  460048 start.go:925] validating driver "docker" against &{Name:no-preload-300878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-300878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:00:43.372113  460048 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 20:00:43.372826  460048 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 20:00:43.429126  460048 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 20:00:43.419761785 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 20:00:43.429475  460048 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 20:00:43.429507  460048 cni.go:84] Creating CNI manager for ""
	I1027 20:00:43.429566  460048 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 20:00:43.429617  460048 start.go:349] cluster config:
	{Name:no-preload-300878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-300878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:00:43.434510  460048 out.go:179] * Starting "no-preload-300878" primary control-plane node in "no-preload-300878" cluster
	I1027 20:00:43.437254  460048 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 20:00:43.440241  460048 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 20:00:43.443084  460048 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:00:43.443242  460048 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/config.json ...
	I1027 20:00:43.443567  460048 cache.go:107] acquiring lock: {Name:mk2c9b32a28909ddde1ea9e1562c451629f3a8bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 20:00:43.443730  460048 cache.go:107] acquiring lock: {Name:mk8f67f1010641520ce2aed88e36df35defaec67 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 20:00:43.443790  460048 cache.go:115] /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1027 20:00:43.443803  460048 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 76.519µs
	I1027 20:00:43.443818  460048 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1027 20:00:43.443836  460048 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 20:00:43.443675  460048 cache.go:115] /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1027 20:00:43.444038  460048 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 482.654µs
	I1027 20:00:43.444072  460048 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1027 20:00:43.444092  460048 cache.go:107] acquiring lock: {Name:mk5a3679f1cf078979f9b59308ac24da693653f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 20:00:43.444154  460048 cache.go:115] /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1027 20:00:43.444167  460048 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 76.938µs
	I1027 20:00:43.444186  460048 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1027 20:00:43.444209  460048 cache.go:107] acquiring lock: {Name:mk6af7dde40e27f19a53963487980377af2c3c95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 20:00:43.444260  460048 cache.go:115] /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1027 20:00:43.444270  460048 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 67.756µs
	I1027 20:00:43.444345  460048 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1027 20:00:43.444386  460048 cache.go:107] acquiring lock: {Name:mk263e9fca65865b31b3432ab012737135a60a06 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 20:00:43.444431  460048 cache.go:115] /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1027 20:00:43.444442  460048 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 58.066µs
	I1027 20:00:43.444462  460048 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1027 20:00:43.444472  460048 cache.go:107] acquiring lock: {Name:mkfced02b35956836ba86d3e97965fe21c458ea4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 20:00:43.444504  460048 cache.go:115] /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1027 20:00:43.444509  460048 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 38.62µs
	I1027 20:00:43.444515  460048 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1027 20:00:43.444510  460048 cache.go:107] acquiring lock: {Name:mk41739ca1e3ab4374125f086ea6ae568ba48650 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 20:00:43.444539  460048 cache.go:107] acquiring lock: {Name:mk633cfcec5e23624dd56cce5b9a2941a9eb26ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 20:00:43.444583  460048 cache.go:115] /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1027 20:00:43.444583  460048 cache.go:115] /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1027 20:00:43.444592  460048 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 90.934µs
	I1027 20:00:43.444601  460048 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1027 20:00:43.444593  460048 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 55.268µs
	I1027 20:00:43.444617  460048 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21801-266035/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1027 20:00:43.444629  460048 cache.go:87] Successfully saved all images to host disk.
	I1027 20:00:43.464654  460048 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 20:00:43.464673  460048 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 20:00:43.464686  460048 cache.go:232] Successfully downloaded all kic artifacts
	I1027 20:00:43.464715  460048 start.go:360] acquireMachinesLock for no-preload-300878: {Name:mk35847aee9eb4cb8c66d589a420d0e6e5324ab7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 20:00:43.464778  460048 start.go:364] duration metric: took 38.243µs to acquireMachinesLock for "no-preload-300878"
	I1027 20:00:43.464805  460048 start.go:96] Skipping create...Using existing machine configuration
	I1027 20:00:43.464814  460048 fix.go:54] fixHost starting: 
	I1027 20:00:43.465074  460048 cli_runner.go:164] Run: docker container inspect no-preload-300878 --format={{.State.Status}}
	I1027 20:00:43.485599  460048 fix.go:112] recreateIfNeeded on no-preload-300878: state=Stopped err=<nil>
	W1027 20:00:43.485630  460048 fix.go:138] unexpected machine state, will restart: <nil>
	W1027 20:00:43.643764  456180 node_ready.go:57] node "embed-certs-629838" has "Ready":"False" status (will retry)
	W1027 20:00:46.143597  456180 node_ready.go:57] node "embed-certs-629838" has "Ready":"False" status (will retry)
	I1027 20:00:43.490814  460048 out.go:252] * Restarting existing docker container for "no-preload-300878" ...
	I1027 20:00:43.490962  460048 cli_runner.go:164] Run: docker start no-preload-300878
	I1027 20:00:43.768686  460048 cli_runner.go:164] Run: docker container inspect no-preload-300878 --format={{.State.Status}}
	I1027 20:00:43.793765  460048 kic.go:430] container "no-preload-300878" state is running.
	I1027 20:00:43.794235  460048 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-300878
	I1027 20:00:43.820354  460048 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/config.json ...
	I1027 20:00:43.820583  460048 machine.go:93] provisionDockerMachine start ...
	I1027 20:00:43.820641  460048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 20:00:43.846391  460048 main.go:141] libmachine: Using SSH client type: native
	I1027 20:00:43.846715  460048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1027 20:00:43.846724  460048 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 20:00:43.847392  460048 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1027 20:00:47.023033  460048 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-300878
	
	I1027 20:00:47.023061  460048 ubuntu.go:182] provisioning hostname "no-preload-300878"
	I1027 20:00:47.023130  460048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 20:00:47.040957  460048 main.go:141] libmachine: Using SSH client type: native
	I1027 20:00:47.041279  460048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1027 20:00:47.041290  460048 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-300878 && echo "no-preload-300878" | sudo tee /etc/hostname
	I1027 20:00:47.205751  460048 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-300878
	
	I1027 20:00:47.205826  460048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 20:00:47.226231  460048 main.go:141] libmachine: Using SSH client type: native
	I1027 20:00:47.226534  460048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1027 20:00:47.226552  460048 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-300878' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-300878/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-300878' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 20:00:47.379423  460048 main.go:141] libmachine: SSH cmd err, output: <nil>: 
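
The hosts block above follows the stock Debian convention: the machine's own hostname resolves via a 127.0.1.1 entry, which is rewritten in place if one already exists and appended otherwise. A minimal standalone sketch of the same logic (NEW_HOSTNAME is a placeholder; this run uses no-preload-300878):

    NEW_HOSTNAME=no-preload-300878
    if ! grep -q "\s${NEW_HOSTNAME}$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1\s' /etc/hosts; then
        # an entry exists: rewrite it to point at the new hostname
        sudo sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 ${NEW_HOSTNAME}/" /etc/hosts
      else
        # no entry yet: append one
        echo "127.0.1.1 ${NEW_HOSTNAME}" | sudo tee -a /etc/hosts
      fi
    fi
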
	I1027 20:00:47.379450  460048 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-266035/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-266035/.minikube}
	I1027 20:00:47.379472  460048 ubuntu.go:190] setting up certificates
	I1027 20:00:47.379482  460048 provision.go:84] configureAuth start
	I1027 20:00:47.379558  460048 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-300878
	I1027 20:00:47.397123  460048 provision.go:143] copyHostCerts
	I1027 20:00:47.397195  460048 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem, removing ...
	I1027 20:00:47.397217  460048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem
	I1027 20:00:47.397294  460048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem (1078 bytes)
	I1027 20:00:47.397407  460048 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem, removing ...
	I1027 20:00:47.397419  460048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem
	I1027 20:00:47.397446  460048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem (1123 bytes)
	I1027 20:00:47.397550  460048 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem, removing ...
	I1027 20:00:47.397559  460048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem
	I1027 20:00:47.397584  460048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem (1675 bytes)
	I1027 20:00:47.397645  460048 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem org=jenkins.no-preload-300878 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-300878]
	I1027 20:00:47.742804  460048 provision.go:177] copyRemoteCerts
	I1027 20:00:47.742871  460048 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 20:00:47.742919  460048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 20:00:47.761647  460048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/no-preload-300878/id_rsa Username:docker}
	I1027 20:00:47.867088  460048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 20:00:47.887086  460048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 20:00:47.904447  460048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 20:00:47.921265  460048 provision.go:87] duration metric: took 541.747565ms to configureAuth
	I1027 20:00:47.921289  460048 ubuntu.go:206] setting minikube options for container-runtime
	I1027 20:00:47.921488  460048 config.go:182] Loaded profile config "no-preload-300878": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:00:47.921585  460048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 20:00:47.939558  460048 main.go:141] libmachine: Using SSH client type: native
	I1027 20:00:47.939870  460048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1027 20:00:47.939888  460048 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 20:00:48.285141  460048 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 20:00:48.285170  460048 machine.go:96] duration metric: took 4.464577318s to provisionDockerMachine
	I1027 20:00:48.285182  460048 start.go:293] postStartSetup for "no-preload-300878" (driver="docker")
	I1027 20:00:48.285194  460048 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 20:00:48.285269  460048 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 20:00:48.285322  460048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 20:00:48.306908  460048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/no-preload-300878/id_rsa Username:docker}
	I1027 20:00:48.410789  460048 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 20:00:48.414151  460048 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 20:00:48.414183  460048 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 20:00:48.414194  460048 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/addons for local assets ...
	I1027 20:00:48.414250  460048 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/files for local assets ...
	I1027 20:00:48.414335  460048 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem -> 2678802.pem in /etc/ssl/certs
	I1027 20:00:48.414443  460048 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 20:00:48.422076  460048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 20:00:48.441093  460048 start.go:296] duration metric: took 155.894679ms for postStartSetup
	I1027 20:00:48.441179  460048 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 20:00:48.441249  460048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 20:00:48.459471  460048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/no-preload-300878/id_rsa Username:docker}
	I1027 20:00:48.564246  460048 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 20:00:48.569027  460048 fix.go:56] duration metric: took 5.104205422s for fixHost
	I1027 20:00:48.569052  460048 start.go:83] releasing machines lock for "no-preload-300878", held for 5.104258581s
	I1027 20:00:48.569129  460048 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-300878
	I1027 20:00:48.585982  460048 ssh_runner.go:195] Run: cat /version.json
	I1027 20:00:48.586004  460048 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 20:00:48.586039  460048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 20:00:48.586057  460048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 20:00:48.610399  460048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/no-preload-300878/id_rsa Username:docker}
	I1027 20:00:48.627470  460048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/no-preload-300878/id_rsa Username:docker}
	I1027 20:00:48.723088  460048 ssh_runner.go:195] Run: systemctl --version
	I1027 20:00:48.816354  460048 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 20:00:48.859343  460048 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 20:00:48.864194  460048 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 20:00:48.864266  460048 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 20:00:48.872134  460048 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 20:00:48.872158  460048 start.go:495] detecting cgroup driver to use...
	I1027 20:00:48.872189  460048 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 20:00:48.872255  460048 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 20:00:48.887388  460048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 20:00:48.901767  460048 docker.go:218] disabling cri-docker service (if available) ...
	I1027 20:00:48.901849  460048 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 20:00:48.917547  460048 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 20:00:48.930889  460048 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 20:00:49.044798  460048 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 20:00:49.157587  460048 docker.go:234] disabling docker service ...
	I1027 20:00:49.157655  460048 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 20:00:49.172348  460048 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 20:00:49.185279  460048 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 20:00:49.316686  460048 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 20:00:49.445919  460048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 20:00:49.459873  460048 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 20:00:49.474251  460048 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 20:00:49.474317  460048 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:00:49.484058  460048 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 20:00:49.484146  460048 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:00:49.494753  460048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:00:49.503637  460048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:00:49.513203  460048 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 20:00:49.521153  460048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:00:49.529964  460048 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:00:49.539893  460048 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:00:49.549133  460048 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 20:00:49.556660  460048 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 20:00:49.563943  460048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:00:49.672021  460048 ssh_runner.go:195] Run: sudo systemctl restart crio
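
The sequence of sed invocations above edits /etc/crio/crio.conf.d/02-crio.conf in place rather than templating a fresh file, so whatever settings the base image already ships are preserved. A condensed sketch of the two central edits (pause image and cgroup driver), assuming the same drop-in path:

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # replace any existing pause_image line with the desired image
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    # force the cgroup manager to match what kubelet will use
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo systemctl daemon-reload && sudo systemctl restart crio
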
	I1027 20:00:49.802564  460048 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 20:00:49.802687  460048 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 20:00:49.806576  460048 start.go:563] Will wait 60s for crictl version
	I1027 20:00:49.806700  460048 ssh_runner.go:195] Run: which crictl
	I1027 20:00:49.810249  460048 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 20:00:49.840880  460048 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 20:00:49.841033  460048 ssh_runner.go:195] Run: crio --version
	I1027 20:00:49.878382  460048 ssh_runner.go:195] Run: crio --version
	I1027 20:00:49.909684  460048 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1027 20:00:48.144668  456180 node_ready.go:57] node "embed-certs-629838" has "Ready":"False" status (will retry)
	W1027 20:00:50.644518  456180 node_ready.go:57] node "embed-certs-629838" has "Ready":"False" status (will retry)
	I1027 20:00:49.912465  460048 cli_runner.go:164] Run: docker network inspect no-preload-300878 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 20:00:49.928749  460048 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1027 20:00:49.932660  460048 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
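
The one-liner above updates /etc/hosts with a filter-and-rewrite pattern: strip any stale host.minikube.internal line, append the fresh mapping, write the result to a temp file, then copy (not move) it back into place. The cp matters because /etc/hosts inside a Docker container is typically a bind mount, which a rename would break. A generalized sketch with the values from this run:

    IP=192.168.85.1 HOST=host.minikube.internal
    { grep -v $'\t'"$HOST"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
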
	I1027 20:00:49.942399  460048 kubeadm.go:883] updating cluster {Name:no-preload-300878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-300878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 20:00:49.942516  460048 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:00:49.942575  460048 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 20:00:49.984802  460048 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 20:00:49.984828  460048 cache_images.go:85] Images are preloaded, skipping loading
	I1027 20:00:49.984836  460048 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1027 20:00:49.984924  460048 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-300878 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-300878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
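
The doubled ExecStart in the kubelet drop-in above is the standard systemd override idiom: an empty ExecStart= first clears the command inherited from kubelet.service, so the line that follows fully replaces it (non-oneshot services may define only one ExecStart, so the reset is required). The same shape as a standalone drop-in, with a hypothetical file name for illustration:

    sudo tee /etc/systemd/system/kubelet.service.d/99-override.conf >/dev/null <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --node-ip=192.168.85.2
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart kubelet
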
	I1027 20:00:49.985007  460048 ssh_runner.go:195] Run: crio config
	I1027 20:00:50.068982  460048 cni.go:84] Creating CNI manager for ""
	I1027 20:00:50.069009  460048 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 20:00:50.069025  460048 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 20:00:50.069049  460048 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-300878 NodeName:no-preload-300878 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 20:00:50.069191  460048 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-300878"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
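Everything from the kubeadm.go:196 line down to the blank line above is a single multi-document YAML file: InitConfiguration and ClusterConfiguration for kubeadm itself, plus KubeletConfiguration and KubeProxyConfiguration that kubeadm forwards to the components. Recent kubeadm releases ship a validate subcommand that can sanity-check such a file by hand (a sketch, assuming kubeadm sits alongside kubelet in the binaries directory this log lists; the file path is the one scp'd below):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
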
	I1027 20:00:50.069269  460048 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 20:00:50.078477  460048 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 20:00:50.078610  460048 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 20:00:50.087595  460048 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1027 20:00:50.102759  460048 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 20:00:50.117365  460048 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1027 20:00:50.131842  460048 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1027 20:00:50.136631  460048 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 20:00:50.148734  460048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:00:50.275474  460048 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 20:00:50.291927  460048 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878 for IP: 192.168.85.2
	I1027 20:00:50.291950  460048 certs.go:195] generating shared ca certs ...
	I1027 20:00:50.291967  460048 certs.go:227] acquiring lock for ca certs: {Name:mk172548fc54811b51040cc0201d01eb4b3dd19b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:00:50.292118  460048 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key
	I1027 20:00:50.292165  460048 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key
	I1027 20:00:50.292177  460048 certs.go:257] generating profile certs ...
	I1027 20:00:50.292259  460048 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/client.key
	I1027 20:00:50.292331  460048 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/apiserver.key.f5d283a0
	I1027 20:00:50.292375  460048 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/proxy-client.key
	I1027 20:00:50.292486  460048 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880.pem (1338 bytes)
	W1027 20:00:50.292519  460048 certs.go:480] ignoring /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880_empty.pem, impossibly tiny 0 bytes
	I1027 20:00:50.292532  460048 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 20:00:50.292557  460048 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem (1078 bytes)
	I1027 20:00:50.292588  460048 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem (1123 bytes)
	I1027 20:00:50.292612  460048 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem (1675 bytes)
	I1027 20:00:50.292660  460048 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 20:00:50.293262  460048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 20:00:50.313980  460048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 20:00:50.333715  460048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 20:00:50.353491  460048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 20:00:50.380563  460048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1027 20:00:50.401345  460048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 20:00:50.424662  460048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 20:00:50.460353  460048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 20:00:50.482812  460048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880.pem --> /usr/share/ca-certificates/267880.pem (1338 bytes)
	I1027 20:00:50.514546  460048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /usr/share/ca-certificates/2678802.pem (1708 bytes)
	I1027 20:00:50.539427  460048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 20:00:50.558701  460048 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 20:00:50.573331  460048 ssh_runner.go:195] Run: openssl version
	I1027 20:00:50.580066  460048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/267880.pem && ln -fs /usr/share/ca-certificates/267880.pem /etc/ssl/certs/267880.pem"
	I1027 20:00:50.589638  460048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/267880.pem
	I1027 20:00:50.593444  460048 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:04 /usr/share/ca-certificates/267880.pem
	I1027 20:00:50.593532  460048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/267880.pem
	I1027 20:00:50.637627  460048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/267880.pem /etc/ssl/certs/51391683.0"
	I1027 20:00:50.646347  460048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2678802.pem && ln -fs /usr/share/ca-certificates/2678802.pem /etc/ssl/certs/2678802.pem"
	I1027 20:00:50.654499  460048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2678802.pem
	I1027 20:00:50.658400  460048 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:04 /usr/share/ca-certificates/2678802.pem
	I1027 20:00:50.658499  460048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2678802.pem
	I1027 20:00:50.699412  460048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2678802.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 20:00:50.707376  460048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 20:00:50.716066  460048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:00:50.719839  460048 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:00:50.719905  460048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:00:50.762738  460048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
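
The three test-and-link commands above reproduce OpenSSL's hashed CA directory layout: a certificate in /etc/ssl/certs is located by the library through a symlink named <subject-hash>.0, where the hash is exactly what `openssl x509 -hash -noout` prints (b5213941 for minikubeCA here, hence b5213941.0). A generic sketch:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints the subject-name hash
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
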
	I1027 20:00:50.770711  460048 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 20:00:50.774373  460048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 20:00:50.816259  460048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 20:00:50.857704  460048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 20:00:50.901020  460048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 20:00:50.957863  460048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 20:00:51.010232  460048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
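
The `-checkend 86400` runs above ask whether each certificate will still be valid 86400 seconds (24 hours) from now: openssl exits 0 if the cert outlives the window and 1 if it expires inside it, so the exit status alone tells the caller whether the control-plane certs need regenerating. For example:

    openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
      && echo "valid for at least 24h" \
      || echo "expires within 24h"
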
	I1027 20:00:51.087892  460048 kubeadm.go:400] StartCluster: {Name:no-preload-300878 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-300878 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:00:51.088008  460048 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 20:00:51.088134  460048 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 20:00:51.175584  460048 cri.go:89] found id: "13d030edd8243f160331166818aa22d925ebe9d23d02c061f90430ac5760b9ea"
	I1027 20:00:51.175607  460048 cri.go:89] found id: "e280576b7cd349242f75cfabe39f57b429b7f1be59a692085c1ac72054e39d40"
	I1027 20:00:51.175613  460048 cri.go:89] found id: "2e749e0d2383f2957caa1a338e460f11d3d14d875b53417f4e0cd2479aad76e0"
	I1027 20:00:51.175616  460048 cri.go:89] found id: "75c7134600c5c012795172e75d3a8a7ce7dfa5f7d06a557943649171a009abd6"
	I1027 20:00:51.175620  460048 cri.go:89] found id: ""
	I1027 20:00:51.175720  460048 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 20:00:51.196561  460048 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T20:00:51Z" level=error msg="open /run/runc: no such file or directory"
	I1027 20:00:51.196701  460048 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 20:00:51.209526  460048 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1027 20:00:51.209543  460048 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1027 20:00:51.209624  460048 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 20:00:51.220532  460048 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 20:00:51.221491  460048 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-300878" does not appear in /home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 20:00:51.222110  460048 kubeconfig.go:62] /home/jenkins/minikube-integration/21801-266035/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-300878" cluster setting kubeconfig missing "no-preload-300878" context setting]
	I1027 20:00:51.223302  460048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/kubeconfig: {Name:mk0a03a594f8d9f9199b1c7cff7c550cec8414c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:00:51.225281  460048 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 20:00:51.238723  460048 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1027 20:00:51.238762  460048 kubeadm.go:601] duration metric: took 29.21385ms to restartPrimaryControlPlane
	I1027 20:00:51.238773  460048 kubeadm.go:402] duration metric: took 150.893016ms to StartCluster
	I1027 20:00:51.238787  460048 settings.go:142] acquiring lock: {Name:mk49b9e2e9b7901c6b51e89b8b1e8ee7f9e88107 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:00:51.238857  460048 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 20:00:51.240421  460048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/kubeconfig: {Name:mk0a03a594f8d9f9199b1c7cff7c550cec8414c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:00:51.240656  460048 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 20:00:51.241146  460048 config.go:182] Loaded profile config "no-preload-300878": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:00:51.241238  460048 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 20:00:51.241394  460048 addons.go:69] Setting storage-provisioner=true in profile "no-preload-300878"
	I1027 20:00:51.241417  460048 addons.go:238] Setting addon storage-provisioner=true in "no-preload-300878"
	W1027 20:00:51.241438  460048 addons.go:247] addon storage-provisioner should already be in state true
	I1027 20:00:51.241475  460048 host.go:66] Checking if "no-preload-300878" exists ...
	I1027 20:00:51.242230  460048 cli_runner.go:164] Run: docker container inspect no-preload-300878 --format={{.State.Status}}
	I1027 20:00:51.242444  460048 addons.go:69] Setting dashboard=true in profile "no-preload-300878"
	I1027 20:00:51.242463  460048 addons.go:238] Setting addon dashboard=true in "no-preload-300878"
	W1027 20:00:51.242489  460048 addons.go:247] addon dashboard should already be in state true
	I1027 20:00:51.242519  460048 host.go:66] Checking if "no-preload-300878" exists ...
	I1027 20:00:51.242824  460048 addons.go:69] Setting default-storageclass=true in profile "no-preload-300878"
	I1027 20:00:51.242856  460048 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-300878"
	I1027 20:00:51.243062  460048 cli_runner.go:164] Run: docker container inspect no-preload-300878 --format={{.State.Status}}
	I1027 20:00:51.243230  460048 cli_runner.go:164] Run: docker container inspect no-preload-300878 --format={{.State.Status}}
	I1027 20:00:51.247618  460048 out.go:179] * Verifying Kubernetes components...
	I1027 20:00:51.265884  460048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:00:51.284587  460048 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1027 20:00:51.292743  460048 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1027 20:00:51.296357  460048 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1027 20:00:51.296381  460048 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1027 20:00:51.296486  460048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 20:00:51.302938  460048 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 20:00:51.305903  460048 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 20:00:51.305925  460048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 20:00:51.305991  460048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 20:00:51.308305  460048 addons.go:238] Setting addon default-storageclass=true in "no-preload-300878"
	W1027 20:00:51.308326  460048 addons.go:247] addon default-storageclass should already be in state true
	I1027 20:00:51.308353  460048 host.go:66] Checking if "no-preload-300878" exists ...
	I1027 20:00:51.308751  460048 cli_runner.go:164] Run: docker container inspect no-preload-300878 --format={{.State.Status}}
	I1027 20:00:51.355188  460048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/no-preload-300878/id_rsa Username:docker}
	I1027 20:00:51.364113  460048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/no-preload-300878/id_rsa Username:docker}
	I1027 20:00:51.369643  460048 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 20:00:51.369669  460048 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 20:00:51.369744  460048 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 20:00:51.397281  460048 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/no-preload-300878/id_rsa Username:docker}
	I1027 20:00:51.644677  460048 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 20:00:51.652607  460048 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1027 20:00:51.652627  460048 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1027 20:00:51.664898  460048 node_ready.go:35] waiting up to 6m0s for node "no-preload-300878" to be "Ready" ...
	I1027 20:00:51.690636  460048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 20:00:51.706228  460048 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1027 20:00:51.706304  460048 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1027 20:00:51.723186  460048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 20:00:51.754355  460048 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1027 20:00:51.754426  460048 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1027 20:00:51.820465  460048 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1027 20:00:51.820531  460048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1027 20:00:51.919454  460048 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1027 20:00:51.919521  460048 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1027 20:00:51.973952  460048 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1027 20:00:51.974026  460048 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1027 20:00:51.998815  460048 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1027 20:00:51.998901  460048 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1027 20:00:52.024995  460048 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1027 20:00:52.025075  460048 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1027 20:00:52.047441  460048 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 20:00:52.047525  460048 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1027 20:00:52.069762  460048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1027 20:00:53.143628  456180 node_ready.go:57] node "embed-certs-629838" has "Ready":"False" status (will retry)
	W1027 20:00:55.147605  456180 node_ready.go:57] node "embed-certs-629838" has "Ready":"False" status (will retry)
	I1027 20:00:55.352555  460048 node_ready.go:49] node "no-preload-300878" is "Ready"
	I1027 20:00:55.352586  460048 node_ready.go:38] duration metric: took 3.68765859s for node "no-preload-300878" to be "Ready" ...
	I1027 20:00:55.352601  460048 api_server.go:52] waiting for apiserver process to appear ...
	I1027 20:00:55.352661  460048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 20:00:57.162052  460048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.471326553s)
	I1027 20:00:57.162124  460048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.438874919s)
	I1027 20:00:57.162376  460048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.09252511s)
	I1027 20:00:57.162509  460048 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.80983238s)
	I1027 20:00:57.162530  460048 api_server.go:72] duration metric: took 5.921842206s to wait for apiserver process to appear ...
	I1027 20:00:57.162537  460048 api_server.go:88] waiting for apiserver healthz status ...
	I1027 20:00:57.162555  460048 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 20:00:57.165392  460048 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-300878 addons enable metrics-server
	
	I1027 20:00:57.173993  460048 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1027 20:00:57.175100  460048 api_server.go:141] control plane version: v1.34.1
	I1027 20:00:57.175127  460048 api_server.go:131] duration metric: took 12.581267ms to wait for apiserver health ...
	I1027 20:00:57.175137  460048 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 20:00:57.179809  460048 system_pods.go:59] 8 kube-system pods found
	I1027 20:00:57.179842  460048 system_pods.go:61] "coredns-66bc5c9577-jlg4z" [9692f7a1-291c-4c66-abc3-e0c78f66bc4c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:00:57.179851  460048 system_pods.go:61] "etcd-no-preload-300878" [e8f1c131-e6a0-47af-8491-bbdfbd04eb72] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 20:00:57.179857  460048 system_pods.go:61] "kindnet-smnp2" [cc388f93-6d32-42d4-b690-08e5713d67c1] Running
	I1027 20:00:57.179868  460048 system_pods.go:61] "kube-apiserver-no-preload-300878" [4e5fb347-6c19-4da9-b4df-26f45bf1d362] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 20:00:57.179870  460048 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1027 20:00:57.179875  460048 system_pods.go:61] "kube-controller-manager-no-preload-300878" [103a9dbe-a475-46c9-8450-1708dc7bded1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 20:00:57.179892  460048 system_pods.go:61] "kube-proxy-wpv4w" [c80663df-d0c2-41dd-a3ec-f4d6652536c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1027 20:00:57.179899  460048 system_pods.go:61] "kube-scheduler-no-preload-300878" [6c301a1e-1d96-4826-b2c9-eb84d99e6a46] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 20:00:57.179903  460048 system_pods.go:61] "storage-provisioner" [54924ba0-8604-4333-8e8f-45bac06fffde] Running
	I1027 20:00:57.179911  460048 system_pods.go:74] duration metric: took 4.768539ms to wait for pod list to return data ...
	I1027 20:00:57.179919  460048 default_sa.go:34] waiting for default service account to be created ...
	I1027 20:00:57.182971  460048 default_sa.go:45] found service account: "default"
	I1027 20:00:57.183002  460048 default_sa.go:55] duration metric: took 3.076705ms for default service account to be created ...
	I1027 20:00:57.183011  460048 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 20:00:57.182967  460048 addons.go:514] duration metric: took 5.941738344s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1027 20:00:57.190284  460048 system_pods.go:86] 8 kube-system pods found
	I1027 20:00:57.190316  460048 system_pods.go:89] "coredns-66bc5c9577-jlg4z" [9692f7a1-291c-4c66-abc3-e0c78f66bc4c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:00:57.190326  460048 system_pods.go:89] "etcd-no-preload-300878" [e8f1c131-e6a0-47af-8491-bbdfbd04eb72] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 20:00:57.190331  460048 system_pods.go:89] "kindnet-smnp2" [cc388f93-6d32-42d4-b690-08e5713d67c1] Running
	I1027 20:00:57.190338  460048 system_pods.go:89] "kube-apiserver-no-preload-300878" [4e5fb347-6c19-4da9-b4df-26f45bf1d362] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 20:00:57.190346  460048 system_pods.go:89] "kube-controller-manager-no-preload-300878" [103a9dbe-a475-46c9-8450-1708dc7bded1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 20:00:57.190352  460048 system_pods.go:89] "kube-proxy-wpv4w" [c80663df-d0c2-41dd-a3ec-f4d6652536c8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1027 20:00:57.190358  460048 system_pods.go:89] "kube-scheduler-no-preload-300878" [6c301a1e-1d96-4826-b2c9-eb84d99e6a46] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 20:00:57.190362  460048 system_pods.go:89] "storage-provisioner" [54924ba0-8604-4333-8e8f-45bac06fffde] Running
	I1027 20:00:57.190369  460048 system_pods.go:126] duration metric: took 7.352967ms to wait for k8s-apps to be running ...
	I1027 20:00:57.190376  460048 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 20:00:57.190434  460048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 20:00:57.210711  460048 system_svc.go:56] duration metric: took 20.325362ms WaitForService to wait for kubelet
	I1027 20:00:57.210789  460048 kubeadm.go:586] duration metric: took 5.970098557s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 20:00:57.210824  460048 node_conditions.go:102] verifying NodePressure condition ...
	I1027 20:00:57.234859  460048 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 20:00:57.234947  460048 node_conditions.go:123] node cpu capacity is 2
	I1027 20:00:57.234975  460048 node_conditions.go:105] duration metric: took 24.129382ms to run NodePressure ...
	I1027 20:00:57.235031  460048 start.go:241] waiting for startup goroutines ...
	I1027 20:00:57.235054  460048 start.go:246] waiting for cluster config update ...
	I1027 20:00:57.235078  460048 start.go:255] writing updated cluster config ...
	I1027 20:00:57.235383  460048 ssh_runner.go:195] Run: rm -f paused
	I1027 20:00:57.239135  460048 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 20:00:57.246485  460048 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jlg4z" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 20:00:57.643534  456180 node_ready.go:57] node "embed-certs-629838" has "Ready":"False" status (will retry)
	W1027 20:00:59.643601  456180 node_ready.go:57] node "embed-certs-629838" has "Ready":"False" status (will retry)
	W1027 20:00:59.251567  460048 pod_ready.go:104] pod "coredns-66bc5c9577-jlg4z" is not "Ready", error: <nil>
	W1027 20:01:01.255841  460048 pod_ready.go:104] pod "coredns-66bc5c9577-jlg4z" is not "Ready", error: <nil>
	W1027 20:01:01.643636  456180 node_ready.go:57] node "embed-certs-629838" has "Ready":"False" status (will retry)
	I1027 20:01:03.651232  456180 node_ready.go:49] node "embed-certs-629838" is "Ready"
	I1027 20:01:03.651272  456180 node_ready.go:38] duration metric: took 41.010553394s for node "embed-certs-629838" to be "Ready" ...
	I1027 20:01:03.651303  456180 api_server.go:52] waiting for apiserver process to appear ...
	I1027 20:01:03.651361  456180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 20:01:03.684097  456180 api_server.go:72] duration metric: took 42.022950613s to wait for apiserver process to appear ...
	I1027 20:01:03.684137  456180 api_server.go:88] waiting for apiserver healthz status ...
	I1027 20:01:03.684155  456180 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 20:01:03.692816  456180 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1027 20:01:03.696544  456180 api_server.go:141] control plane version: v1.34.1
	I1027 20:01:03.696569  456180 api_server.go:131] duration metric: took 12.425973ms to wait for apiserver health ...
	I1027 20:01:03.696582  456180 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 20:01:03.702387  456180 system_pods.go:59] 8 kube-system pods found
	I1027 20:01:03.702471  456180 system_pods.go:61] "coredns-66bc5c9577-ch8jv" [31b0e0f4-af1b-40c7-9b20-0941025a0e20] Pending
	I1027 20:01:03.702492  456180 system_pods.go:61] "etcd-embed-certs-629838" [8a3ea38f-5f58-4bc1-92a6-f623c5bbf89a] Running
	I1027 20:01:03.702537  456180 system_pods.go:61] "kindnet-cfqpk" [26bd03b9-d7d1-42d3-9a0f-57a0079df4df] Running
	I1027 20:01:03.702562  456180 system_pods.go:61] "kube-apiserver-embed-certs-629838" [5a679f7d-bd7f-41f2-8494-368533168d94] Running
	I1027 20:01:03.702587  456180 system_pods.go:61] "kube-controller-manager-embed-certs-629838" [208c550a-f378-40f6-aa34-d3e6f628ec0b] Running
	I1027 20:01:03.702623  456180 system_pods.go:61] "kube-proxy-bwql6" [41eb9367-187c-4241-967f-46d7e5ff9003] Running
	I1027 20:01:03.702649  456180 system_pods.go:61] "kube-scheduler-embed-certs-629838" [79fc7d25-8b0a-46c9-82f0-714850ebf675] Running
	I1027 20:01:03.702670  456180 system_pods.go:61] "storage-provisioner" [39cc3c46-ef65-4c2e-8c82-6273f639f702] Pending
	I1027 20:01:03.702709  456180 system_pods.go:74] duration metric: took 6.120345ms to wait for pod list to return data ...
	I1027 20:01:03.702735  456180 default_sa.go:34] waiting for default service account to be created ...
	I1027 20:01:03.715999  456180 default_sa.go:45] found service account: "default"
	I1027 20:01:03.716072  456180 default_sa.go:55] duration metric: took 13.313595ms for default service account to be created ...
	I1027 20:01:03.716096  456180 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 20:01:03.738761  456180 system_pods.go:86] 8 kube-system pods found
	I1027 20:01:03.738847  456180 system_pods.go:89] "coredns-66bc5c9577-ch8jv" [31b0e0f4-af1b-40c7-9b20-0941025a0e20] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:01:03.738868  456180 system_pods.go:89] "etcd-embed-certs-629838" [8a3ea38f-5f58-4bc1-92a6-f623c5bbf89a] Running
	I1027 20:01:03.738910  456180 system_pods.go:89] "kindnet-cfqpk" [26bd03b9-d7d1-42d3-9a0f-57a0079df4df] Running
	I1027 20:01:03.738934  456180 system_pods.go:89] "kube-apiserver-embed-certs-629838" [5a679f7d-bd7f-41f2-8494-368533168d94] Running
	I1027 20:01:03.738956  456180 system_pods.go:89] "kube-controller-manager-embed-certs-629838" [208c550a-f378-40f6-aa34-d3e6f628ec0b] Running
	I1027 20:01:03.739006  456180 system_pods.go:89] "kube-proxy-bwql6" [41eb9367-187c-4241-967f-46d7e5ff9003] Running
	I1027 20:01:03.739033  456180 system_pods.go:89] "kube-scheduler-embed-certs-629838" [79fc7d25-8b0a-46c9-82f0-714850ebf675] Running
	I1027 20:01:03.739055  456180 system_pods.go:89] "storage-provisioner" [39cc3c46-ef65-4c2e-8c82-6273f639f702] Pending
	I1027 20:01:03.739108  456180 retry.go:31] will retry after 300.523344ms: missing components: kube-dns
	I1027 20:01:04.053312  456180 system_pods.go:86] 8 kube-system pods found
	I1027 20:01:04.053396  456180 system_pods.go:89] "coredns-66bc5c9577-ch8jv" [31b0e0f4-af1b-40c7-9b20-0941025a0e20] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:01:04.053440  456180 system_pods.go:89] "etcd-embed-certs-629838" [8a3ea38f-5f58-4bc1-92a6-f623c5bbf89a] Running
	I1027 20:01:04.053462  456180 system_pods.go:89] "kindnet-cfqpk" [26bd03b9-d7d1-42d3-9a0f-57a0079df4df] Running
	I1027 20:01:04.053501  456180 system_pods.go:89] "kube-apiserver-embed-certs-629838" [5a679f7d-bd7f-41f2-8494-368533168d94] Running
	I1027 20:01:04.053527  456180 system_pods.go:89] "kube-controller-manager-embed-certs-629838" [208c550a-f378-40f6-aa34-d3e6f628ec0b] Running
	I1027 20:01:04.053549  456180 system_pods.go:89] "kube-proxy-bwql6" [41eb9367-187c-4241-967f-46d7e5ff9003] Running
	I1027 20:01:04.053588  456180 system_pods.go:89] "kube-scheduler-embed-certs-629838" [79fc7d25-8b0a-46c9-82f0-714850ebf675] Running
	I1027 20:01:04.053617  456180 system_pods.go:89] "storage-provisioner" [39cc3c46-ef65-4c2e-8c82-6273f639f702] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 20:01:04.053667  456180 retry.go:31] will retry after 381.18252ms: missing components: kube-dns
	I1027 20:01:04.439775  456180 system_pods.go:86] 8 kube-system pods found
	I1027 20:01:04.439817  456180 system_pods.go:89] "coredns-66bc5c9577-ch8jv" [31b0e0f4-af1b-40c7-9b20-0941025a0e20] Running
	I1027 20:01:04.439825  456180 system_pods.go:89] "etcd-embed-certs-629838" [8a3ea38f-5f58-4bc1-92a6-f623c5bbf89a] Running
	I1027 20:01:04.439833  456180 system_pods.go:89] "kindnet-cfqpk" [26bd03b9-d7d1-42d3-9a0f-57a0079df4df] Running
	I1027 20:01:04.439839  456180 system_pods.go:89] "kube-apiserver-embed-certs-629838" [5a679f7d-bd7f-41f2-8494-368533168d94] Running
	I1027 20:01:04.439847  456180 system_pods.go:89] "kube-controller-manager-embed-certs-629838" [208c550a-f378-40f6-aa34-d3e6f628ec0b] Running
	I1027 20:01:04.439864  456180 system_pods.go:89] "kube-proxy-bwql6" [41eb9367-187c-4241-967f-46d7e5ff9003] Running
	I1027 20:01:04.439882  456180 system_pods.go:89] "kube-scheduler-embed-certs-629838" [79fc7d25-8b0a-46c9-82f0-714850ebf675] Running
	I1027 20:01:04.439886  456180 system_pods.go:89] "storage-provisioner" [39cc3c46-ef65-4c2e-8c82-6273f639f702] Running
	I1027 20:01:04.439893  456180 system_pods.go:126] duration metric: took 723.777096ms to wait for k8s-apps to be running ...
	I1027 20:01:04.439911  456180 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 20:01:04.439974  456180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 20:01:04.462472  456180 system_svc.go:56] duration metric: took 22.556577ms WaitForService to wait for kubelet
	I1027 20:01:04.462515  456180 kubeadm.go:586] duration metric: took 42.801374454s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 20:01:04.462534  456180 node_conditions.go:102] verifying NodePressure condition ...
	I1027 20:01:04.466108  456180 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 20:01:04.466147  456180 node_conditions.go:123] node cpu capacity is 2
	I1027 20:01:04.466167  456180 node_conditions.go:105] duration metric: took 3.622183ms to run NodePressure ...
	I1027 20:01:04.466185  456180 start.go:241] waiting for startup goroutines ...
	I1027 20:01:04.466199  456180 start.go:246] waiting for cluster config update ...
	I1027 20:01:04.466210  456180 start.go:255] writing updated cluster config ...
	I1027 20:01:04.466664  456180 ssh_runner.go:195] Run: rm -f paused
	I1027 20:01:04.470623  456180 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 20:01:04.476021  456180 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ch8jv" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:04.484149  456180 pod_ready.go:94] pod "coredns-66bc5c9577-ch8jv" is "Ready"
	I1027 20:01:04.484174  456180 pod_ready.go:86] duration metric: took 8.125196ms for pod "coredns-66bc5c9577-ch8jv" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:04.487165  456180 pod_ready.go:83] waiting for pod "etcd-embed-certs-629838" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:04.492005  456180 pod_ready.go:94] pod "etcd-embed-certs-629838" is "Ready"
	I1027 20:01:04.492038  456180 pod_ready.go:86] duration metric: took 4.845476ms for pod "etcd-embed-certs-629838" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:04.494442  456180 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-629838" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:04.500770  456180 pod_ready.go:94] pod "kube-apiserver-embed-certs-629838" is "Ready"
	I1027 20:01:04.500800  456180 pod_ready.go:86] duration metric: took 6.334345ms for pod "kube-apiserver-embed-certs-629838" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:04.505049  456180 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-629838" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:04.876081  456180 pod_ready.go:94] pod "kube-controller-manager-embed-certs-629838" is "Ready"
	I1027 20:01:04.876108  456180 pod_ready.go:86] duration metric: took 371.019398ms for pod "kube-controller-manager-embed-certs-629838" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:05.076997  456180 pod_ready.go:83] waiting for pod "kube-proxy-bwql6" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:05.476035  456180 pod_ready.go:94] pod "kube-proxy-bwql6" is "Ready"
	I1027 20:01:05.476059  456180 pod_ready.go:86] duration metric: took 399.032978ms for pod "kube-proxy-bwql6" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:05.676392  456180 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-629838" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:06.076018  456180 pod_ready.go:94] pod "kube-scheduler-embed-certs-629838" is "Ready"
	I1027 20:01:06.076047  456180 pod_ready.go:86] duration metric: took 399.627965ms for pod "kube-scheduler-embed-certs-629838" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:06.076061  456180 pod_ready.go:40] duration metric: took 1.605321973s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 20:01:06.167946  456180 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 20:01:06.169880  456180 out.go:179] * Done! kubectl is now configured to use "embed-certs-629838" cluster and "default" namespace by default
	W1027 20:01:03.258277  460048 pod_ready.go:104] pod "coredns-66bc5c9577-jlg4z" is not "Ready", error: <nil>
	W1027 20:01:05.752573  460048 pod_ready.go:104] pod "coredns-66bc5c9577-jlg4z" is not "Ready", error: <nil>
	W1027 20:01:07.753030  460048 pod_ready.go:104] pod "coredns-66bc5c9577-jlg4z" is not "Ready", error: <nil>
	W1027 20:01:10.252215  460048 pod_ready.go:104] pod "coredns-66bc5c9577-jlg4z" is not "Ready", error: <nil>
	W1027 20:01:12.252905  460048 pod_ready.go:104] pod "coredns-66bc5c9577-jlg4z" is not "Ready", error: <nil>
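	
	The retry loop above, and the earlier healthz probe against https://192.168.85.2:8443, can be reproduced by hand against the same cluster. A minimal sketch, assuming the no-preload-300878 kubectl context this run writes:
	
	  kubectl --context no-preload-300878 get --raw /healthz
	  kubectl --context no-preload-300878 -n kube-system wait \
	    --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m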
	
	
	==> CRI-O <==
	Oct 27 20:01:04 embed-certs-629838 crio[838]: time="2025-10-27T20:01:04.12102288Z" level=info msg="Created container 39ec50d6554d4cc0ca44e8145236f94521b92380b9e9ad9ec065318b3323f824: kube-system/coredns-66bc5c9577-ch8jv/coredns" id=9171693a-6edf-4c2c-8719-d491096b357d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 20:01:04 embed-certs-629838 crio[838]: time="2025-10-27T20:01:04.121787067Z" level=info msg="Starting container: 39ec50d6554d4cc0ca44e8145236f94521b92380b9e9ad9ec065318b3323f824" id=b3bf3f2a-8ead-42fb-9ff6-c3ca95d02c54 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 20:01:04 embed-certs-629838 crio[838]: time="2025-10-27T20:01:04.1300712Z" level=info msg="Started container" PID=1741 containerID=39ec50d6554d4cc0ca44e8145236f94521b92380b9e9ad9ec065318b3323f824 description=kube-system/coredns-66bc5c9577-ch8jv/coredns id=b3bf3f2a-8ead-42fb-9ff6-c3ca95d02c54 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ea9cd0036e25108698fcea9d9588ee77943b5f0599632ef11d4c02ce89fd89c7
	Oct 27 20:01:06 embed-certs-629838 crio[838]: time="2025-10-27T20:01:06.718450727Z" level=info msg="Running pod sandbox: default/busybox/POD" id=9b08622f-5edd-4af6-90b0-735730d9b15d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 20:01:06 embed-certs-629838 crio[838]: time="2025-10-27T20:01:06.718513487Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:01:06 embed-certs-629838 crio[838]: time="2025-10-27T20:01:06.725748918Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:298897d9409531a4fe7b13bb386e609f107c511931266fea666f30021336af1e UID:00b9d871-3c8b-42a7-9c24-e1ac939805c4 NetNS:/var/run/netns/572c980c-caf9-489a-88ec-6c32cebd649e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d0d0}] Aliases:map[]}"
	Oct 27 20:01:06 embed-certs-629838 crio[838]: time="2025-10-27T20:01:06.725893571Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 27 20:01:06 embed-certs-629838 crio[838]: time="2025-10-27T20:01:06.738374517Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:298897d9409531a4fe7b13bb386e609f107c511931266fea666f30021336af1e UID:00b9d871-3c8b-42a7-9c24-e1ac939805c4 NetNS:/var/run/netns/572c980c-caf9-489a-88ec-6c32cebd649e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d0d0}] Aliases:map[]}"
	Oct 27 20:01:06 embed-certs-629838 crio[838]: time="2025-10-27T20:01:06.738657307Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 27 20:01:06 embed-certs-629838 crio[838]: time="2025-10-27T20:01:06.745037476Z" level=info msg="Ran pod sandbox 298897d9409531a4fe7b13bb386e609f107c511931266fea666f30021336af1e with infra container: default/busybox/POD" id=9b08622f-5edd-4af6-90b0-735730d9b15d name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 20:01:06 embed-certs-629838 crio[838]: time="2025-10-27T20:01:06.746878196Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=30adfa38-1d68-49fd-8e1e-a09b7ff4692e name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:01:06 embed-certs-629838 crio[838]: time="2025-10-27T20:01:06.747399584Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=30adfa38-1d68-49fd-8e1e-a09b7ff4692e name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:01:06 embed-certs-629838 crio[838]: time="2025-10-27T20:01:06.747561459Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=30adfa38-1d68-49fd-8e1e-a09b7ff4692e name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:01:06 embed-certs-629838 crio[838]: time="2025-10-27T20:01:06.754935241Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9dd43078-2e0b-46aa-922b-1fe5d74ea6d5 name=/runtime.v1.ImageService/PullImage
	Oct 27 20:01:06 embed-certs-629838 crio[838]: time="2025-10-27T20:01:06.756621036Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 27 20:01:08 embed-certs-629838 crio[838]: time="2025-10-27T20:01:08.836312094Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=9dd43078-2e0b-46aa-922b-1fe5d74ea6d5 name=/runtime.v1.ImageService/PullImage
	Oct 27 20:01:08 embed-certs-629838 crio[838]: time="2025-10-27T20:01:08.837386705Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=28dba442-8c62-4ad2-88d0-1149b1d09d4f name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:01:08 embed-certs-629838 crio[838]: time="2025-10-27T20:01:08.839257463Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=da463b20-aea5-484c-a6a4-55ee1bc81887 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:01:08 embed-certs-629838 crio[838]: time="2025-10-27T20:01:08.844611446Z" level=info msg="Creating container: default/busybox/busybox" id=8bc9d094-db05-49b4-bd2c-c74e0b5695a1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 20:01:08 embed-certs-629838 crio[838]: time="2025-10-27T20:01:08.844731558Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:01:08 embed-certs-629838 crio[838]: time="2025-10-27T20:01:08.84968049Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:01:08 embed-certs-629838 crio[838]: time="2025-10-27T20:01:08.850138643Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:01:08 embed-certs-629838 crio[838]: time="2025-10-27T20:01:08.866368037Z" level=info msg="Created container 4057912c177d5070cdc5333a8c19664f9c3a9c87a12802b9ce122acc081c9327: default/busybox/busybox" id=8bc9d094-db05-49b4-bd2c-c74e0b5695a1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 20:01:08 embed-certs-629838 crio[838]: time="2025-10-27T20:01:08.870100445Z" level=info msg="Starting container: 4057912c177d5070cdc5333a8c19664f9c3a9c87a12802b9ce122acc081c9327" id=3edd7234-1e68-4a54-aab6-cffd99477c70 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 20:01:08 embed-certs-629838 crio[838]: time="2025-10-27T20:01:08.872402387Z" level=info msg="Started container" PID=1791 containerID=4057912c177d5070cdc5333a8c19664f9c3a9c87a12802b9ce122acc081c9327 description=default/busybox/busybox id=3edd7234-1e68-4a54-aab6-cffd99477c70 name=/runtime.v1.RuntimeService/StartContainer sandboxID=298897d9409531a4fe7b13bb386e609f107c511931266fea666f30021336af1e
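	
	The pull/create/start sequence CRI-O records above can be replayed directly on the node with crictl. A sketch, assuming shell access via "minikube ssh -p embed-certs-629838":
	
	  sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	  sudo crictl images | grep busybox
	  sudo crictl ps --name busybox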
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	4057912c177d5       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   298897d940953       busybox                                      default
	39ec50d6554d4       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      11 seconds ago       Running             coredns                   0                   ea9cd0036e251       coredns-66bc5c9577-ch8jv                     kube-system
	f38e8f3451591       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      11 seconds ago       Running             storage-provisioner       0                   8acdb199259cb       storage-provisioner                          kube-system
	386343cb61580       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      52 seconds ago       Running             kube-proxy                0                   78c89e3012988       kube-proxy-bwql6                             kube-system
	ff67f65188ead       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      52 seconds ago       Running             kindnet-cni               0                   367c1cb0821dc       kindnet-cfqpk                                kube-system
	7c0b1a1df8a67       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   b6d82746f68e8       kube-scheduler-embed-certs-629838            kube-system
	e8ef382e4be4b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   119bf4f4b0090       etcd-embed-certs-629838                      kube-system
	29c3b98a52904       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   53de3ce026578       kube-apiserver-embed-certs-629838            kube-system
	00dc6770d1d5c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   4c1be53f8351b       kube-controller-manager-embed-certs-629838   kube-system
	
	
	==> coredns [39ec50d6554d4cc0ca44e8145236f94521b92380b9e9ad9ec065318b3323f824] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38715 - 21609 "HINFO IN 5100042062554822778.5766201320253252911. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.036008424s
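	
	The lone HINFO query is CoreDNS's own startup self-check, and the NXDOMAIN answer is the expected result, so the server is resolving. In-cluster resolution can be spot-checked with the busybox image already pulled for this run (the 1.28.x variant is the one whose nslookup behaves):
	
	  kubectl --context embed-certs-629838 run dns-check --rm -it --restart=Never \
	    --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc -- nslookup kubernetes.default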
	
	
	==> describe nodes <==
	Name:               embed-certs-629838
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-629838
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=embed-certs-629838
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T20_00_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 20:00:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-629838
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 20:01:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 20:01:03 +0000   Mon, 27 Oct 2025 20:00:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 20:01:03 +0000   Mon, 27 Oct 2025 20:00:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 20:01:03 +0000   Mon, 27 Oct 2025 20:00:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 20:01:03 +0000   Mon, 27 Oct 2025 20:01:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-629838
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                6cfa2846-7c31-4e89-9dcc-f2fbb567f43d
	  Boot ID:                    23ea05b4-8203-4fb9-a84a-5deb4b091cbb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-ch8jv                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     54s
	  kube-system                 etcd-embed-certs-629838                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         59s
	  kube-system                 kindnet-cfqpk                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-embed-certs-629838             250m (12%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-embed-certs-629838    200m (10%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-bwql6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-embed-certs-629838             100m (5%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 52s                kube-proxy       
	  Normal   Starting                 67s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 67s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  67s (x8 over 67s)  kubelet          Node embed-certs-629838 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    67s (x8 over 67s)  kubelet          Node embed-certs-629838 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     67s (x8 over 67s)  kubelet          Node embed-certs-629838 status is now: NodeHasSufficientPID
	  Normal   Starting                 59s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s                kubelet          Node embed-certs-629838 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s                kubelet          Node embed-certs-629838 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s                kubelet          Node embed-certs-629838 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                node-controller  Node embed-certs-629838 event: Registered Node embed-certs-629838 in Controller
	  Normal   NodeReady                13s                kubelet          Node embed-certs-629838 status is now: NodeReady
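	
	The NodeReady event at 13s of age lines up with the Ready=True condition transition at 20:01:03 above. The same view can be pulled at any point with:
	
	  kubectl --context embed-certs-629838 get node embed-certs-629838 -o wide
	  kubectl --context embed-certs-629838 describe node embed-certs-629838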
	
	
	==> dmesg <==
	[Oct27 19:36] overlayfs: idmapped layers are currently not supported
	[Oct27 19:37] overlayfs: idmapped layers are currently not supported
	[Oct27 19:38] overlayfs: idmapped layers are currently not supported
	[Oct27 19:39] overlayfs: idmapped layers are currently not supported
	[Oct27 19:40] overlayfs: idmapped layers are currently not supported
	[  +7.015891] overlayfs: idmapped layers are currently not supported
	[Oct27 19:41] overlayfs: idmapped layers are currently not supported
	[ +28.455216] overlayfs: idmapped layers are currently not supported
	[Oct27 19:42] overlayfs: idmapped layers are currently not supported
	[ +26.490174] overlayfs: idmapped layers are currently not supported
	[Oct27 19:44] overlayfs: idmapped layers are currently not supported
	[Oct27 19:45] overlayfs: idmapped layers are currently not supported
	[Oct27 19:47] overlayfs: idmapped layers are currently not supported
	[Oct27 19:49] overlayfs: idmapped layers are currently not supported
	[ +31.410335] overlayfs: idmapped layers are currently not supported
	[Oct27 19:51] overlayfs: idmapped layers are currently not supported
	[Oct27 19:53] overlayfs: idmapped layers are currently not supported
	[Oct27 19:54] overlayfs: idmapped layers are currently not supported
	[Oct27 19:55] overlayfs: idmapped layers are currently not supported
	[Oct27 19:56] overlayfs: idmapped layers are currently not supported
	[Oct27 19:57] overlayfs: idmapped layers are currently not supported
	[Oct27 19:58] overlayfs: idmapped layers are currently not supported
	[Oct27 19:59] overlayfs: idmapped layers are currently not supported
	[Oct27 20:00] overlayfs: idmapped layers are currently not supported
	[ +41.321877] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e8ef382e4be4bd584475434a9dcbdbbac8ed71b57af9507e97987ff6c7d3bb4f] <==
	{"level":"warn","ts":"2025-10-27T20:00:11.668280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:11.686503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:11.720722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:11.743881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:11.771488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:11.791141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:11.825398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:11.841064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:11.891058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:11.907961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:11.926797Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:11.970404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:11.992791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:12.014657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:12.055781Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:12.104334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:12.107257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:12.141878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:12.166031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:12.204284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:12.246497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:12.260645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:12.284889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:12.328006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:12.507185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57550","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:01:16 up  2:43,  0 user,  load average: 3.55, 3.13, 2.66
	Linux embed-certs-629838 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ff67f65188ead20c633641052ac1b83eecd3a6bcb6ccfdd8b536dcd0b38bdd0e] <==
	I1027 20:00:23.122801       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 20:00:23.123075       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1027 20:00:23.123198       1 main.go:148] setting mtu 1500 for CNI 
	I1027 20:00:23.123217       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 20:00:23.123228       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T20:00:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 20:00:23.323730       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 20:00:23.323793       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 20:00:23.323811       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 20:00:23.324487       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 20:00:53.324585       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1027 20:00:53.324713       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1027 20:00:53.324808       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1027 20:00:53.325034       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1027 20:00:54.924075       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 20:00:54.924178       1 metrics.go:72] Registering metrics
	I1027 20:00:54.924295       1 controller.go:711] "Syncing nftables rules"
	I1027 20:01:03.328079       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 20:01:03.328129       1 main.go:301] handling current node
	I1027 20:01:13.324219       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 20:01:13.324260       1 main.go:301] handling current node
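	
	kindnet's reflector timeouts against 10.96.0.1:443 stop once its caches sync at 20:00:54, bracketing the window in which the service VIP was unreachable. Reachability of that ClusterIP can be spot-checked from inside the node (assuming curl is present in the node image; /healthz is anonymously readable in a default RBAC setup):
	
	  curl -ksS -o /dev/null -w '%{http_code}\n' https://10.96.0.1:443/healthz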
	
	
	==> kube-apiserver [29c3b98a52904e33a8078d400aaf159c81568a8384c7094727ab2ad5f6023451] <==
	I1027 20:00:13.890418       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1027 20:00:13.890521       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1027 20:00:13.910739       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 20:00:14.024552       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1027 20:00:14.087195       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 20:00:14.101796       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 20:00:14.103942       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 20:00:14.493491       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1027 20:00:14.518306       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1027 20:00:14.518586       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 20:00:15.650249       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 20:00:15.759332       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 20:00:15.872778       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1027 20:00:15.888258       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1027 20:00:15.892237       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 20:00:15.898080       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 20:00:16.336825       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 20:00:17.145528       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 20:00:17.163951       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1027 20:00:17.177147       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 20:00:21.942177       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1027 20:00:22.121247       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 20:00:22.426705       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 20:00:22.442131       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1027 20:01:14.521237       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:57786: use of closed network connection
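	
	The single "use of closed network connection" at 20:01:14 is an abruptly closed client connection, most likely the log collection for this report, rather than an apiserver fault. Per-check health detail is exposed on the readyz endpoint:
	
	  kubectl --context embed-certs-629838 get --raw '/readyz?verbose'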
	
	
	==> kube-controller-manager [00dc6770d1d5c3ed6b3d4df0db72a3ecd5f6eae735eeb76cc2047c5c216e3f27] <==
	I1027 20:00:21.293104       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1027 20:00:21.293111       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 20:00:21.297651       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1027 20:00:21.308103       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 20:00:21.308212       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1027 20:00:21.308280       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1027 20:00:21.309492       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 20:00:21.309573       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1027 20:00:21.310871       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 20:00:21.314515       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 20:00:21.316912       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-629838" podCIDRs=["10.244.0.0/24"]
	I1027 20:00:21.320608       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 20:00:21.331850       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 20:00:21.331959       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 20:00:21.331998       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1027 20:00:21.332012       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 20:00:21.332028       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 20:00:21.332160       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 20:00:21.332876       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1027 20:00:21.334132       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 20:00:21.335373       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 20:00:21.339086       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 20:00:21.340646       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 20:00:21.342427       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 20:01:06.287211       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
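	
	"Exiting master disruption mode" confirms the node-lifecycle controller observed the node going Ready. That the controller-manager itself holds leadership can be checked via its coordination lease:
	
	  kubectl --context embed-certs-629838 -n kube-system get lease kube-controller-manager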
	
	
	==> kube-proxy [386343cb61580bf6a7406f268aa1df8d3b18dd654dbbabaf47577d3663259963] <==
	I1027 20:00:23.905598       1 server_linux.go:53] "Using iptables proxy"
	I1027 20:00:23.992534       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 20:00:24.093579       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 20:00:24.093612       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1027 20:00:24.093695       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 20:00:24.111819       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 20:00:24.111873       1 server_linux.go:132] "Using iptables Proxier"
	I1027 20:00:24.116175       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 20:00:24.116531       1 server.go:527] "Version info" version="v1.34.1"
	I1027 20:00:24.116593       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 20:00:24.119787       1 config.go:106] "Starting endpoint slice config controller"
	I1027 20:00:24.119814       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 20:00:24.120124       1 config.go:200] "Starting service config controller"
	I1027 20:00:24.120142       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 20:00:24.120465       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 20:00:24.123046       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 20:00:24.121176       1 config.go:309] "Starting node config controller"
	I1027 20:00:24.123180       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 20:00:24.123217       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 20:00:24.220282       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 20:00:24.220293       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 20:00:24.223190       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [7c0b1a1df8a67c3798d0fe2ea5b1740af2e288c42e7d504dba2e0f3c77c1bd98] <==
	I1027 20:00:13.181837       1 serving.go:386] Generated self-signed cert in-memory
	I1027 20:00:16.265962       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 20:00:16.266056       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 20:00:16.273776       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 20:00:16.275962       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 20:00:16.275991       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1027 20:00:16.276256       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1027 20:00:16.276013       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:00:16.297192       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:00:16.276022       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 20:00:16.297271       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 20:00:16.376372       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1027 20:00:16.404929       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 20:00:16.404993       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 20:00:22 embed-certs-629838 kubelet[1315]: I1027 20:00:22.138401    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26bd03b9-d7d1-42d3-9a0f-57a0079df4df-lib-modules\") pod \"kindnet-cfqpk\" (UID: \"26bd03b9-d7d1-42d3-9a0f-57a0079df4df\") " pod="kube-system/kindnet-cfqpk"
	Oct 27 20:00:22 embed-certs-629838 kubelet[1315]: I1027 20:00:22.138426    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5csp\" (UniqueName: \"kubernetes.io/projected/26bd03b9-d7d1-42d3-9a0f-57a0079df4df-kube-api-access-q5csp\") pod \"kindnet-cfqpk\" (UID: \"26bd03b9-d7d1-42d3-9a0f-57a0079df4df\") " pod="kube-system/kindnet-cfqpk"
	Oct 27 20:00:22 embed-certs-629838 kubelet[1315]: I1027 20:00:22.138450    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26bd03b9-d7d1-42d3-9a0f-57a0079df4df-xtables-lock\") pod \"kindnet-cfqpk\" (UID: \"26bd03b9-d7d1-42d3-9a0f-57a0079df4df\") " pod="kube-system/kindnet-cfqpk"
	Oct 27 20:00:22 embed-certs-629838 kubelet[1315]: E1027 20:00:22.224495    1315 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 27 20:00:22 embed-certs-629838 kubelet[1315]: E1027 20:00:22.224528    1315 projected.go:196] Error preparing data for projected volume kube-api-access-xbzs6 for pod kube-system/kube-proxy-bwql6: configmap "kube-root-ca.crt" not found
	Oct 27 20:00:22 embed-certs-629838 kubelet[1315]: E1027 20:00:22.224649    1315 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/41eb9367-187c-4241-967f-46d7e5ff9003-kube-api-access-xbzs6 podName:41eb9367-187c-4241-967f-46d7e5ff9003 nodeName:}" failed. No retries permitted until 2025-10-27 20:00:22.724607238 +0000 UTC m=+5.756293412 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xbzs6" (UniqueName: "kubernetes.io/projected/41eb9367-187c-4241-967f-46d7e5ff9003-kube-api-access-xbzs6") pod "kube-proxy-bwql6" (UID: "41eb9367-187c-4241-967f-46d7e5ff9003") : configmap "kube-root-ca.crt" not found
	Oct 27 20:00:22 embed-certs-629838 kubelet[1315]: E1027 20:00:22.338189    1315 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Oct 27 20:00:22 embed-certs-629838 kubelet[1315]: E1027 20:00:22.338231    1315 projected.go:196] Error preparing data for projected volume kube-api-access-q5csp for pod kube-system/kindnet-cfqpk: configmap "kube-root-ca.crt" not found
	Oct 27 20:00:22 embed-certs-629838 kubelet[1315]: E1027 20:00:22.338294    1315 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/26bd03b9-d7d1-42d3-9a0f-57a0079df4df-kube-api-access-q5csp podName:26bd03b9-d7d1-42d3-9a0f-57a0079df4df nodeName:}" failed. No retries permitted until 2025-10-27 20:00:22.83827531 +0000 UTC m=+5.869961492 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-q5csp" (UniqueName: "kubernetes.io/projected/26bd03b9-d7d1-42d3-9a0f-57a0079df4df-kube-api-access-q5csp") pod "kindnet-cfqpk" (UID: "26bd03b9-d7d1-42d3-9a0f-57a0079df4df") : configmap "kube-root-ca.crt" not found
	Oct 27 20:00:22 embed-certs-629838 kubelet[1315]: I1027 20:00:22.747637    1315 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 27 20:00:23 embed-certs-629838 kubelet[1315]: E1027 20:00:23.139631    1315 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Oct 27 20:00:23 embed-certs-629838 kubelet[1315]: E1027 20:00:23.139754    1315 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41eb9367-187c-4241-967f-46d7e5ff9003-kube-proxy podName:41eb9367-187c-4241-967f-46d7e5ff9003 nodeName:}" failed. No retries permitted until 2025-10-27 20:00:23.639731361 +0000 UTC m=+6.671417535 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/41eb9367-187c-4241-967f-46d7e5ff9003-kube-proxy") pod "kube-proxy-bwql6" (UID: "41eb9367-187c-4241-967f-46d7e5ff9003") : failed to sync configmap cache: timed out waiting for the condition
	Oct 27 20:00:23 embed-certs-629838 kubelet[1315]: W1027 20:00:23.821676    1315 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c4f57eb9d97cb78db15678f52105eae7c06964f89c91a987456d1c3d7cf90fa0/crio-78c89e30129888e9c3d58f7c7a96d9fc8672b38417afed86333757aa2def33af WatchSource:0}: Error finding container 78c89e30129888e9c3d58f7c7a96d9fc8672b38417afed86333757aa2def33af: Status 404 returned error can't find the container with id 78c89e30129888e9c3d58f7c7a96d9fc8672b38417afed86333757aa2def33af
	Oct 27 20:00:24 embed-certs-629838 kubelet[1315]: I1027 20:00:24.212444    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-cfqpk" podStartSLOduration=3.212425067 podStartE2EDuration="3.212425067s" podCreationTimestamp="2025-10-27 20:00:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 20:00:23.226040572 +0000 UTC m=+6.257726746" watchObservedRunningTime="2025-10-27 20:00:24.212425067 +0000 UTC m=+7.244111249"
	Oct 27 20:00:27 embed-certs-629838 kubelet[1315]: I1027 20:00:27.064196    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bwql6" podStartSLOduration=6.064164028 podStartE2EDuration="6.064164028s" podCreationTimestamp="2025-10-27 20:00:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 20:00:24.213166937 +0000 UTC m=+7.244853119" watchObservedRunningTime="2025-10-27 20:00:27.064164028 +0000 UTC m=+10.095850210"
	Oct 27 20:01:03 embed-certs-629838 kubelet[1315]: I1027 20:01:03.600562    1315 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 27 20:01:03 embed-certs-629838 kubelet[1315]: I1027 20:01:03.777924    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31b0e0f4-af1b-40c7-9b20-0941025a0e20-config-volume\") pod \"coredns-66bc5c9577-ch8jv\" (UID: \"31b0e0f4-af1b-40c7-9b20-0941025a0e20\") " pod="kube-system/coredns-66bc5c9577-ch8jv"
	Oct 27 20:01:03 embed-certs-629838 kubelet[1315]: I1027 20:01:03.778142    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9969\" (UniqueName: \"kubernetes.io/projected/31b0e0f4-af1b-40c7-9b20-0941025a0e20-kube-api-access-f9969\") pod \"coredns-66bc5c9577-ch8jv\" (UID: \"31b0e0f4-af1b-40c7-9b20-0941025a0e20\") " pod="kube-system/coredns-66bc5c9577-ch8jv"
	Oct 27 20:01:03 embed-certs-629838 kubelet[1315]: I1027 20:01:03.778267    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/39cc3c46-ef65-4c2e-8c82-6273f639f702-tmp\") pod \"storage-provisioner\" (UID: \"39cc3c46-ef65-4c2e-8c82-6273f639f702\") " pod="kube-system/storage-provisioner"
	Oct 27 20:01:03 embed-certs-629838 kubelet[1315]: I1027 20:01:03.778361    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jlq6\" (UniqueName: \"kubernetes.io/projected/39cc3c46-ef65-4c2e-8c82-6273f639f702-kube-api-access-4jlq6\") pod \"storage-provisioner\" (UID: \"39cc3c46-ef65-4c2e-8c82-6273f639f702\") " pod="kube-system/storage-provisioner"
	Oct 27 20:01:03 embed-certs-629838 kubelet[1315]: W1027 20:01:03.998148    1315 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c4f57eb9d97cb78db15678f52105eae7c06964f89c91a987456d1c3d7cf90fa0/crio-8acdb199259cb1acafc51dd8899d05b09e2f9d20b4a71697e1a0eae3f5d8b814 WatchSource:0}: Error finding container 8acdb199259cb1acafc51dd8899d05b09e2f9d20b4a71697e1a0eae3f5d8b814: Status 404 returned error can't find the container with id 8acdb199259cb1acafc51dd8899d05b09e2f9d20b4a71697e1a0eae3f5d8b814
	Oct 27 20:01:04 embed-certs-629838 kubelet[1315]: I1027 20:01:04.329865    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.329845717 podStartE2EDuration="42.329845717s" podCreationTimestamp="2025-10-27 20:00:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 20:01:04.309805606 +0000 UTC m=+47.341491805" watchObservedRunningTime="2025-10-27 20:01:04.329845717 +0000 UTC m=+47.361531899"
	Oct 27 20:01:06 embed-certs-629838 kubelet[1315]: I1027 20:01:06.412800    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ch8jv" podStartSLOduration=44.412780156 podStartE2EDuration="44.412780156s" podCreationTimestamp="2025-10-27 20:00:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 20:01:04.339311387 +0000 UTC m=+47.370997561" watchObservedRunningTime="2025-10-27 20:01:06.412780156 +0000 UTC m=+49.444466330"
	Oct 27 20:01:06 embed-certs-629838 kubelet[1315]: I1027 20:01:06.510569    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fstl8\" (UniqueName: \"kubernetes.io/projected/00b9d871-3c8b-42a7-9c24-e1ac939805c4-kube-api-access-fstl8\") pod \"busybox\" (UID: \"00b9d871-3c8b-42a7-9c24-e1ac939805c4\") " pod="default/busybox"
	Oct 27 20:01:06 embed-certs-629838 kubelet[1315]: W1027 20:01:06.743444    1315 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c4f57eb9d97cb78db15678f52105eae7c06964f89c91a987456d1c3d7cf90fa0/crio-298897d9409531a4fe7b13bb386e609f107c511931266fea666f30021336af1e WatchSource:0}: Error finding container 298897d9409531a4fe7b13bb386e609f107c511931266fea666f30021336af1e: Status 404 returned error can't find the container with id 298897d9409531a4fe7b13bb386e609f107c511931266fea666f30021336af1e
	
	
	==> storage-provisioner [f38e8f3451591b8212d5b0b63cc7d2f3fa14f1afc6247344bad12b816f44221c] <==
	I1027 20:01:04.091740       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 20:01:04.167061       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 20:01:04.167118       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 20:01:04.192326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:04.203428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 20:01:04.203693       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 20:01:04.203937       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-629838_3bf4dfd9-b8b1-44a6-b90f-28d3d67cabf6!
	I1027 20:01:04.204022       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2ebd0f97-21e2-431a-a333-48d0485c417f", APIVersion:"v1", ResourceVersion:"464", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-629838_3bf4dfd9-b8b1-44a6-b90f-28d3d67cabf6 became leader
	W1027 20:01:04.216225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:04.226241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 20:01:04.304552       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-629838_3bf4dfd9-b8b1-44a6-b90f-28d3d67cabf6!
	W1027 20:01:06.230972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:06.243509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:08.246889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:08.253602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:10.256504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:10.260555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:12.264114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:12.268180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:14.271769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:14.276902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:16.280998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:16.286292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
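
Note: the storage-provisioner warnings at the end of the logs above come from its leader election still locking on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath), which the warning says is deprecated in v1.33+ in favor of coordination.k8s.io Leases. Below is a minimal client-go sketch of the Lease-based replacement the warning points at; the identity string is illustrative and this is not minikube's actual provisioner code.

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes in-cluster credentials, as the provisioner pod has
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lock on a Lease instead of the deprecated v1 Endpoints object seen in the log.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath",
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "example-identity"}, // illustrative
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// start the provisioner controller, as in "Started provisioner controller" above
			},
			OnStoppedLeading: func() {
				// stop provisioning; another replica may take over
			},
		},
	})
}
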
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-629838 -n embed-certs-629838
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-629838 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.48s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (8s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-300878 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-300878 --alsologtostderr -v=1: exit status 80 (2.010027984s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-300878 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 20:01:48.248624  464997 out.go:360] Setting OutFile to fd 1 ...
	I1027 20:01:48.248907  464997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:01:48.248947  464997 out.go:374] Setting ErrFile to fd 2...
	I1027 20:01:48.248969  464997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:01:48.249317  464997 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 20:01:48.249623  464997 out.go:368] Setting JSON to false
	I1027 20:01:48.249674  464997 mustload.go:65] Loading cluster: no-preload-300878
	I1027 20:01:48.250096  464997 config.go:182] Loaded profile config "no-preload-300878": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:01:48.250604  464997 cli_runner.go:164] Run: docker container inspect no-preload-300878 --format={{.State.Status}}
	I1027 20:01:48.276568  464997 host.go:66] Checking if "no-preload-300878" exists ...
	I1027 20:01:48.276886  464997 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 20:01:48.394523  464997 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-27 20:01:48.385222786 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 20:01:48.395188  464997 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-300878 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1027 20:01:48.403585  464997 out.go:179] * Pausing node no-preload-300878 ... 
	I1027 20:01:48.407165  464997 host.go:66] Checking if "no-preload-300878" exists ...
	I1027 20:01:48.407539  464997 ssh_runner.go:195] Run: systemctl --version
	I1027 20:01:48.407596  464997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-300878
	I1027 20:01:48.437656  464997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/no-preload-300878/id_rsa Username:docker}
	I1027 20:01:48.554570  464997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 20:01:48.571502  464997 pause.go:52] kubelet running: true
	I1027 20:01:48.571567  464997 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 20:01:48.918437  464997 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 20:01:48.918529  464997 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 20:01:49.039760  464997 cri.go:89] found id: "b445fba8c8ac67334825dd1de6351eea9e0bc3f983f24bfdffca5c882248e749"
	I1027 20:01:49.039783  464997 cri.go:89] found id: "e2edb66752f0320ca15324996f37b99874c7495a2aef8abe85781a5d7bfa18cf"
	I1027 20:01:49.039789  464997 cri.go:89] found id: "ce4b2f0831d4b6d80de7c02268477bd3775a8d23bb376b0d95e7a73ee6e7f12f"
	I1027 20:01:49.039793  464997 cri.go:89] found id: "3627ed707f2d56dfd79e2f7904b8af77c14c72df05c03340c5194af8a728a9c5"
	I1027 20:01:49.039796  464997 cri.go:89] found id: "dab65465dca396808c706937f4961cc55e7b5490a396435f7a5ce712e477451c"
	I1027 20:01:49.039801  464997 cri.go:89] found id: "13d030edd8243f160331166818aa22d925ebe9d23d02c061f90430ac5760b9ea"
	I1027 20:01:49.039804  464997 cri.go:89] found id: "e280576b7cd349242f75cfabe39f57b429b7f1be59a692085c1ac72054e39d40"
	I1027 20:01:49.039808  464997 cri.go:89] found id: "2e749e0d2383f2957caa1a338e460f11d3d14d875b53417f4e0cd2479aad76e0"
	I1027 20:01:49.039813  464997 cri.go:89] found id: "75c7134600c5c012795172e75d3a8a7ce7dfa5f7d06a557943649171a009abd6"
	I1027 20:01:49.039821  464997 cri.go:89] found id: "ff4d1e7aa0afdad72f13361ebbf1d1951f4d0389c1da1547bb8463d6cbfc6973"
	I1027 20:01:49.039825  464997 cri.go:89] found id: "845ed893ecacdcea3dadb975971aab3db09c283ebdfc1d1660e45965d8599714"
	I1027 20:01:49.039828  464997 cri.go:89] found id: ""
	I1027 20:01:49.039875  464997 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 20:01:49.056633  464997 retry.go:31] will retry after 154.267762ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T20:01:49Z" level=error msg="open /run/runc: no such file or directory"
	I1027 20:01:49.211987  464997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 20:01:49.229238  464997 pause.go:52] kubelet running: false
	I1027 20:01:49.229313  464997 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 20:01:49.455016  464997 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 20:01:49.455098  464997 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 20:01:49.550048  464997 cri.go:89] found id: "b445fba8c8ac67334825dd1de6351eea9e0bc3f983f24bfdffca5c882248e749"
	I1027 20:01:49.550081  464997 cri.go:89] found id: "e2edb66752f0320ca15324996f37b99874c7495a2aef8abe85781a5d7bfa18cf"
	I1027 20:01:49.550087  464997 cri.go:89] found id: "ce4b2f0831d4b6d80de7c02268477bd3775a8d23bb376b0d95e7a73ee6e7f12f"
	I1027 20:01:49.550091  464997 cri.go:89] found id: "3627ed707f2d56dfd79e2f7904b8af77c14c72df05c03340c5194af8a728a9c5"
	I1027 20:01:49.550094  464997 cri.go:89] found id: "dab65465dca396808c706937f4961cc55e7b5490a396435f7a5ce712e477451c"
	I1027 20:01:49.550098  464997 cri.go:89] found id: "13d030edd8243f160331166818aa22d925ebe9d23d02c061f90430ac5760b9ea"
	I1027 20:01:49.550101  464997 cri.go:89] found id: "e280576b7cd349242f75cfabe39f57b429b7f1be59a692085c1ac72054e39d40"
	I1027 20:01:49.550104  464997 cri.go:89] found id: "2e749e0d2383f2957caa1a338e460f11d3d14d875b53417f4e0cd2479aad76e0"
	I1027 20:01:49.550108  464997 cri.go:89] found id: "75c7134600c5c012795172e75d3a8a7ce7dfa5f7d06a557943649171a009abd6"
	I1027 20:01:49.550114  464997 cri.go:89] found id: "ff4d1e7aa0afdad72f13361ebbf1d1951f4d0389c1da1547bb8463d6cbfc6973"
	I1027 20:01:49.550120  464997 cri.go:89] found id: "845ed893ecacdcea3dadb975971aab3db09c283ebdfc1d1660e45965d8599714"
	I1027 20:01:49.550137  464997 cri.go:89] found id: ""
	I1027 20:01:49.550198  464997 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 20:01:49.569045  464997 retry.go:31] will retry after 212.75785ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T20:01:49Z" level=error msg="open /run/runc: no such file or directory"
	I1027 20:01:49.782468  464997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 20:01:49.796461  464997 pause.go:52] kubelet running: false
	I1027 20:01:49.796550  464997 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 20:01:50.030324  464997 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 20:01:50.030420  464997 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 20:01:50.138774  464997 cri.go:89] found id: "b445fba8c8ac67334825dd1de6351eea9e0bc3f983f24bfdffca5c882248e749"
	I1027 20:01:50.138794  464997 cri.go:89] found id: "e2edb66752f0320ca15324996f37b99874c7495a2aef8abe85781a5d7bfa18cf"
	I1027 20:01:50.138799  464997 cri.go:89] found id: "ce4b2f0831d4b6d80de7c02268477bd3775a8d23bb376b0d95e7a73ee6e7f12f"
	I1027 20:01:50.138803  464997 cri.go:89] found id: "3627ed707f2d56dfd79e2f7904b8af77c14c72df05c03340c5194af8a728a9c5"
	I1027 20:01:50.138807  464997 cri.go:89] found id: "dab65465dca396808c706937f4961cc55e7b5490a396435f7a5ce712e477451c"
	I1027 20:01:50.138810  464997 cri.go:89] found id: "13d030edd8243f160331166818aa22d925ebe9d23d02c061f90430ac5760b9ea"
	I1027 20:01:50.138813  464997 cri.go:89] found id: "e280576b7cd349242f75cfabe39f57b429b7f1be59a692085c1ac72054e39d40"
	I1027 20:01:50.138817  464997 cri.go:89] found id: "2e749e0d2383f2957caa1a338e460f11d3d14d875b53417f4e0cd2479aad76e0"
	I1027 20:01:50.138821  464997 cri.go:89] found id: "75c7134600c5c012795172e75d3a8a7ce7dfa5f7d06a557943649171a009abd6"
	I1027 20:01:50.138836  464997 cri.go:89] found id: "ff4d1e7aa0afdad72f13361ebbf1d1951f4d0389c1da1547bb8463d6cbfc6973"
	I1027 20:01:50.138840  464997 cri.go:89] found id: "845ed893ecacdcea3dadb975971aab3db09c283ebdfc1d1660e45965d8599714"
	I1027 20:01:50.138843  464997 cri.go:89] found id: ""
	I1027 20:01:50.138890  464997 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 20:01:50.160255  464997 out.go:203] 
	W1027 20:01:50.163935  464997 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T20:01:50Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T20:01:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 20:01:50.163956  464997 out.go:285] * 
	* 
	W1027 20:01:50.172566  464997 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 20:01:50.177402  464997 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-300878 --alsologtostderr -v=1 failed: exit status 80
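
Note: every pause attempt in the stderr above dies at the same step: `sudo runc list -f json` exits 1 because the runc state directory /run/runc does not exist on this CRI-O node, so the jittered retries (154ms, then 212ms) can never succeed and the command finally exits with GUEST_PAUSE. Below is a standalone Go sketch of that probe-and-retry loop as it appears in the log; the attempt cap and base delay are illustrative assumptions, not minikube's real constants.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// listRuncContainers runs the same probe the pause path runs; on this node it
// fails with `open /run/runc: no such file or directory`.
func listRuncContainers() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
}

func main() {
	backoff := 150 * time.Millisecond
	for attempt := 1; attempt <= 3; attempt++ {
		out, err := listRuncContainers()
		if err == nil {
			fmt.Printf("running containers: %s\n", out)
			return
		}
		if attempt == 3 {
			// Mirrors the final GUEST_PAUSE exit above: give up once the
			// retries are exhausted.
			fmt.Printf("Exiting due to GUEST_PAUSE: %v\n%s\n", err, out)
			return
		}
		// Jittered backoff, loosely matching the 154ms/212ms retries in the log.
		delay := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		backoff *= 2
	}
}
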
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-300878
helpers_test.go:243: (dbg) docker inspect no-preload-300878:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5f7533431bd6ad05927fc6ee4ffbe127ba4e775919d072603a24e3cd4e1d5d89",
	        "Created": "2025-10-27T19:59:03.085735227Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 460174,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T20:00:43.524296602Z",
	            "FinishedAt": "2025-10-27T20:00:42.71806872Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/5f7533431bd6ad05927fc6ee4ffbe127ba4e775919d072603a24e3cd4e1d5d89/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5f7533431bd6ad05927fc6ee4ffbe127ba4e775919d072603a24e3cd4e1d5d89/hostname",
	        "HostsPath": "/var/lib/docker/containers/5f7533431bd6ad05927fc6ee4ffbe127ba4e775919d072603a24e3cd4e1d5d89/hosts",
	        "LogPath": "/var/lib/docker/containers/5f7533431bd6ad05927fc6ee4ffbe127ba4e775919d072603a24e3cd4e1d5d89/5f7533431bd6ad05927fc6ee4ffbe127ba4e775919d072603a24e3cd4e1d5d89-json.log",
	        "Name": "/no-preload-300878",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-300878:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-300878",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5f7533431bd6ad05927fc6ee4ffbe127ba4e775919d072603a24e3cd4e1d5d89",
	                "LowerDir": "/var/lib/docker/overlay2/9bbd17e120a72b16cae429750c48e4e412848fa0a221daef03291fd6af1df13e-init/diff:/var/lib/docker/overlay2/0567218bbe05e8b59b2aef7ad82032d6716040c1f9d9d73e7de8682101079f76/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9bbd17e120a72b16cae429750c48e4e412848fa0a221daef03291fd6af1df13e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9bbd17e120a72b16cae429750c48e4e412848fa0a221daef03291fd6af1df13e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9bbd17e120a72b16cae429750c48e4e412848fa0a221daef03291fd6af1df13e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-300878",
	                "Source": "/var/lib/docker/volumes/no-preload-300878/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-300878",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-300878",
	                "name.minikube.sigs.k8s.io": "no-preload-300878",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5f9fbf419a0051093d15a299a2c10a6faca066a07d1ce0492aebf175c3c99c37",
	            "SandboxKey": "/var/run/docker/netns/5f9fbf419a00",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-300878": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:82:b2:cc:e9:a9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "12fd71527f5be91d352c6fcacb328f609f1124632115c17524de411b48d37139",
	                    "EndpointID": "974cb468a3748c0de9246729814a896308acbd568be5371b60ecb575135f1f15",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-300878",
	                        "5f7533431bd6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
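
Note: in the inspect output above, HostConfig.PortBindings requests HostPort "" for every port, so Docker assigns ephemeral host ports (33428-33432 under NetworkSettings.Ports), and minikube reads them back with the inspect template shown at the top of the stderr. Below is a small Go sketch of that lookup; the container name is taken from this run, the rest is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort resolves the host port Docker mapped to containerPort/tcp, using
// the same Go template minikube runs above for 22/tcp.
func hostPort(container, containerPort string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s/tcp") 0).HostPort}}`, containerPort)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Against the container above this should print 33428 for 22/tcp and
	// 33431 for 8443/tcp.
	for _, p := range []string{"22", "8443"} {
		hp, err := hostPort("no-preload-300878", p)
		if err != nil {
			fmt.Println("inspect failed:", err)
			continue
		}
		fmt.Printf("%s/tcp -> 127.0.0.1:%s\n", p, hp)
	}
}
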
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-300878 -n no-preload-300878
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-300878 -n no-preload-300878: exit status 2 (458.647712ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-300878 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-300878 logs -n 25: (1.808509593s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cert-options-319273 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-319273    │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:56 UTC │
	│ delete  │ -p cert-options-319273                                                                                                                                                                                                                        │ cert-options-319273    │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:56 UTC │
	│ start   │ -p old-k8s-version-942644 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-942644 │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:57 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-942644 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-942644 │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │                     │
	│ stop    │ -p old-k8s-version-942644 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-942644 │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:58 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-942644 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-942644 │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:58 UTC │
	│ start   │ -p old-k8s-version-942644 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-942644 │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:59 UTC │
	│ start   │ -p cert-expiration-280013 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-280013 │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:58 UTC │
	│ delete  │ -p cert-expiration-280013                                                                                                                                                                                                                     │ cert-expiration-280013 │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:59 UTC │
	│ start   │ -p no-preload-300878 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-300878      │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 20:00 UTC │
	│ image   │ old-k8s-version-942644 image list --format=json                                                                                                                                                                                               │ old-k8s-version-942644 │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 19:59 UTC │
	│ pause   │ -p old-k8s-version-942644 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-942644 │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │                     │
	│ delete  │ -p old-k8s-version-942644                                                                                                                                                                                                                     │ old-k8s-version-942644 │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 19:59 UTC │
	│ delete  │ -p old-k8s-version-942644                                                                                                                                                                                                                     │ old-k8s-version-942644 │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 19:59 UTC │
	│ start   │ -p embed-certs-629838 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-629838     │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 20:01 UTC │
	│ addons  │ enable metrics-server -p no-preload-300878 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-300878      │ jenkins │ v1.37.0 │ 27 Oct 25 20:00 UTC │                     │
	│ stop    │ -p no-preload-300878 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-300878      │ jenkins │ v1.37.0 │ 27 Oct 25 20:00 UTC │ 27 Oct 25 20:00 UTC │
	│ addons  │ enable dashboard -p no-preload-300878 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-300878      │ jenkins │ v1.37.0 │ 27 Oct 25 20:00 UTC │ 27 Oct 25 20:00 UTC │
	│ start   │ -p no-preload-300878 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-300878      │ jenkins │ v1.37.0 │ 27 Oct 25 20:00 UTC │ 27 Oct 25 20:01 UTC │
	│ addons  │ enable metrics-server -p embed-certs-629838 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-629838     │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │                     │
	│ stop    │ -p embed-certs-629838 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-629838     │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ addons  │ enable dashboard -p embed-certs-629838 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-629838     │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ start   │ -p embed-certs-629838 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-629838     │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │                     │
	│ image   │ no-preload-300878 image list --format=json                                                                                                                                                                                                    │ no-preload-300878      │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ pause   │ -p no-preload-300878 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-300878      │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 20:01:29
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 20:01:29.265783  462995 out.go:360] Setting OutFile to fd 1 ...
	I1027 20:01:29.266115  462995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:01:29.266155  462995 out.go:374] Setting ErrFile to fd 2...
	I1027 20:01:29.266175  462995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:01:29.266457  462995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 20:01:29.266870  462995 out.go:368] Setting JSON to false
	I1027 20:01:29.267913  462995 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9842,"bootTime":1761585448,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1027 20:01:29.268015  462995 start.go:141] virtualization:  
	I1027 20:01:29.271120  462995 out.go:179] * [embed-certs-629838] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 20:01:29.275036  462995 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 20:01:29.275162  462995 notify.go:220] Checking for updates...
	I1027 20:01:29.280975  462995 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 20:01:29.283884  462995 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 20:01:29.286727  462995 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube
	I1027 20:01:29.289529  462995 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 20:01:29.292376  462995 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 20:01:29.295726  462995 config.go:182] Loaded profile config "embed-certs-629838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:01:29.296331  462995 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 20:01:29.332066  462995 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 20:01:29.332240  462995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 20:01:29.399528  462995 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 20:01:29.389124253 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 20:01:29.399638  462995 docker.go:318] overlay module found
	I1027 20:01:29.402928  462995 out.go:179] * Using the docker driver based on existing profile
	I1027 20:01:29.406087  462995 start.go:305] selected driver: docker
	I1027 20:01:29.406108  462995 start.go:925] validating driver "docker" against &{Name:embed-certs-629838 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-629838 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:01:29.406218  462995 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 20:01:29.407084  462995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 20:01:29.457679  462995 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 20:01:29.448361319 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 20:01:29.458014  462995 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 20:01:29.458049  462995 cni.go:84] Creating CNI manager for ""
	I1027 20:01:29.458113  462995 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 20:01:29.458155  462995 start.go:349] cluster config:
	{Name:embed-certs-629838 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-629838 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:01:29.461395  462995 out.go:179] * Starting "embed-certs-629838" primary control-plane node in "embed-certs-629838" cluster
	I1027 20:01:29.464174  462995 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 20:01:29.467092  462995 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 20:01:29.469852  462995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:01:29.469903  462995 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 20:01:29.469914  462995 cache.go:58] Caching tarball of preloaded images
	I1027 20:01:29.469945  462995 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 20:01:29.470005  462995 preload.go:233] Found /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 20:01:29.470015  462995 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 20:01:29.470127  462995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/config.json ...
	I1027 20:01:29.489141  462995 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 20:01:29.489162  462995 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 20:01:29.489180  462995 cache.go:232] Successfully downloaded all kic artifacts
	I1027 20:01:29.489202  462995 start.go:360] acquireMachinesLock for embed-certs-629838: {Name:mk8675e8c935af9c23da71750794b4a71f97e11f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 20:01:29.489258  462995 start.go:364] duration metric: took 39.023µs to acquireMachinesLock for "embed-certs-629838"
	I1027 20:01:29.489276  462995 start.go:96] Skipping create...Using existing machine configuration
	I1027 20:01:29.489282  462995 fix.go:54] fixHost starting: 
	I1027 20:01:29.489533  462995 cli_runner.go:164] Run: docker container inspect embed-certs-629838 --format={{.State.Status}}
	I1027 20:01:29.507387  462995 fix.go:112] recreateIfNeeded on embed-certs-629838: state=Stopped err=<nil>
	W1027 20:01:29.507423  462995 fix.go:138] unexpected machine state, will restart: <nil>
	W1027 20:01:29.252716  460048 pod_ready.go:104] pod "coredns-66bc5c9577-jlg4z" is not "Ready", error: <nil>
	W1027 20:01:31.256342  460048 pod_ready.go:104] pod "coredns-66bc5c9577-jlg4z" is not "Ready", error: <nil>
	I1027 20:01:29.510561  462995 out.go:252] * Restarting existing docker container for "embed-certs-629838" ...
	I1027 20:01:29.510660  462995 cli_runner.go:164] Run: docker start embed-certs-629838
	I1027 20:01:29.763801  462995 cli_runner.go:164] Run: docker container inspect embed-certs-629838 --format={{.State.Status}}
	I1027 20:01:29.786256  462995 kic.go:430] container "embed-certs-629838" state is running.
	I1027 20:01:29.786764  462995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-629838
	I1027 20:01:29.810976  462995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/config.json ...
	I1027 20:01:29.811244  462995 machine.go:93] provisionDockerMachine start ...
	I1027 20:01:29.811303  462995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 20:01:29.836016  462995 main.go:141] libmachine: Using SSH client type: native
	I1027 20:01:29.836463  462995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1027 20:01:29.836477  462995 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 20:01:29.837479  462995 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
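	(The "ssh: handshake failed: EOF" above is expected right after "docker start": sshd inside the container is still coming up, and the provisioner simply retries the dial until it succeeds, about 3s later in this log. A minimal shell sketch of the same wait, assuming the forwarded port 33433 and the key path shown in this log:
	    until ssh -o ConnectTimeout=2 -o StrictHostKeyChecking=no \
	          -i ~/.minikube/machines/embed-certs-629838/id_rsa \
	          -p 33433 docker@127.0.0.1 true 2>/dev/null; do
	      sleep 1   # retry until sshd accepts the connection
	    done
	)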
	I1027 20:01:32.986640  462995 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-629838
	
	I1027 20:01:32.986669  462995 ubuntu.go:182] provisioning hostname "embed-certs-629838"
	I1027 20:01:32.986738  462995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 20:01:33.015247  462995 main.go:141] libmachine: Using SSH client type: native
	I1027 20:01:33.015580  462995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1027 20:01:33.015599  462995 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-629838 && echo "embed-certs-629838" | sudo tee /etc/hostname
	I1027 20:01:33.172637  462995 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-629838
	
	I1027 20:01:33.172746  462995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 20:01:33.199099  462995 main.go:141] libmachine: Using SSH client type: native
	I1027 20:01:33.199412  462995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1027 20:01:33.199436  462995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-629838' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-629838/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-629838' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 20:01:33.363483  462995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 20:01:33.363574  462995 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-266035/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-266035/.minikube}
	I1027 20:01:33.363638  462995 ubuntu.go:190] setting up certificates
	I1027 20:01:33.363679  462995 provision.go:84] configureAuth start
	I1027 20:01:33.363769  462995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-629838
	I1027 20:01:33.380639  462995 provision.go:143] copyHostCerts
	I1027 20:01:33.380711  462995 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem, removing ...
	I1027 20:01:33.380733  462995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem
	I1027 20:01:33.380818  462995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem (1123 bytes)
	I1027 20:01:33.380920  462995 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem, removing ...
	I1027 20:01:33.380931  462995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem
	I1027 20:01:33.380959  462995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem (1675 bytes)
	I1027 20:01:33.381020  462995 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem, removing ...
	I1027 20:01:33.381029  462995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem
	I1027 20:01:33.381054  462995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem (1078 bytes)
	I1027 20:01:33.381110  462995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem org=jenkins.embed-certs-629838 san=[127.0.0.1 192.168.76.2 embed-certs-629838 localhost minikube]
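	(provision.go builds this server certificate with Go's crypto/x509; a rough openssl equivalent of the line above, where the ca.pem/ca-key.pem/server-*.pem file names are assumptions and the org and SAN list are taken from the log:
	    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	      -out server.csr -subj "/O=jenkins.embed-certs-629838"
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	      -out server.pem -days 365 \
	      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:embed-certs-629838,DNS:localhost,DNS:minikube')
	)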
	I1027 20:01:33.922784  462995 provision.go:177] copyRemoteCerts
	I1027 20:01:33.922866  462995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 20:01:33.922917  462995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 20:01:33.941193  462995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/embed-certs-629838/id_rsa Username:docker}
	I1027 20:01:34.051601  462995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 20:01:34.073061  462995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 20:01:34.092112  462995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 20:01:34.110620  462995 provision.go:87] duration metric: took 746.901886ms to configureAuth
	I1027 20:01:34.110702  462995 ubuntu.go:206] setting minikube options for container-runtime
	I1027 20:01:34.110936  462995 config.go:182] Loaded profile config "embed-certs-629838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:01:34.111132  462995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 20:01:34.130041  462995 main.go:141] libmachine: Using SSH client type: native
	I1027 20:01:34.130356  462995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1027 20:01:34.130370  462995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 20:01:34.461739  462995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 20:01:34.461759  462995 machine.go:96] duration metric: took 4.650505847s to provisionDockerMachine
	I1027 20:01:34.461769  462995 start.go:293] postStartSetup for "embed-certs-629838" (driver="docker")
	I1027 20:01:34.461780  462995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 20:01:34.461855  462995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 20:01:34.461895  462995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 20:01:34.483779  462995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/embed-certs-629838/id_rsa Username:docker}
	I1027 20:01:34.598205  462995 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 20:01:34.604921  462995 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 20:01:34.604949  462995 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 20:01:34.604960  462995 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/addons for local assets ...
	I1027 20:01:34.605016  462995 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/files for local assets ...
	I1027 20:01:34.605109  462995 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem -> 2678802.pem in /etc/ssl/certs
	I1027 20:01:34.605213  462995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 20:01:34.616959  462995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 20:01:34.645990  462995 start.go:296] duration metric: took 184.205409ms for postStartSetup
	I1027 20:01:34.646145  462995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 20:01:34.646218  462995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 20:01:34.664079  462995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/embed-certs-629838/id_rsa Username:docker}
	I1027 20:01:34.771013  462995 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 20:01:34.776626  462995 fix.go:56] duration metric: took 5.28733534s for fixHost
	I1027 20:01:34.776648  462995 start.go:83] releasing machines lock for "embed-certs-629838", held for 5.287381961s
	I1027 20:01:34.776725  462995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-629838
	I1027 20:01:34.793484  462995 ssh_runner.go:195] Run: cat /version.json
	I1027 20:01:34.793533  462995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 20:01:34.793875  462995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 20:01:34.793933  462995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 20:01:34.816324  462995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/embed-certs-629838/id_rsa Username:docker}
	I1027 20:01:34.824624  462995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/embed-certs-629838/id_rsa Username:docker}
	I1027 20:01:34.918894  462995 ssh_runner.go:195] Run: systemctl --version
	I1027 20:01:35.018128  462995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 20:01:35.066804  462995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 20:01:35.072709  462995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 20:01:35.072891  462995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 20:01:35.082744  462995 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 20:01:35.082771  462995 start.go:495] detecting cgroup driver to use...
	I1027 20:01:35.082807  462995 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 20:01:35.082872  462995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 20:01:35.099518  462995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 20:01:35.113387  462995 docker.go:218] disabling cri-docker service (if available) ...
	I1027 20:01:35.113485  462995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 20:01:35.130167  462995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 20:01:35.144237  462995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 20:01:35.268401  462995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 20:01:35.388675  462995 docker.go:234] disabling docker service ...
	I1027 20:01:35.388741  462995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 20:01:35.404531  462995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 20:01:35.417444  462995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 20:01:35.543939  462995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 20:01:35.667500  462995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 20:01:35.680475  462995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 20:01:35.695906  462995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 20:01:35.696021  462995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:01:35.705823  462995 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 20:01:35.705943  462995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:01:35.715910  462995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:01:35.725338  462995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:01:35.734556  462995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 20:01:35.744674  462995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:01:35.755290  462995 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:01:35.764120  462995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:01:35.772893  462995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 20:01:35.780449  462995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 20:01:35.787919  462995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:01:35.914540  462995 ssh_runner.go:195] Run: sudo systemctl restart crio
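	(Net effect of the sed chain above, sketched: after the restart, /etc/crio/crio.conf.d/02-crio.conf should carry roughly these keys. Illustrative excerpt only; the real drop-in has more settings:
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
	)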
	I1027 20:01:36.056783  462995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 20:01:36.056934  462995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 20:01:36.061391  462995 start.go:563] Will wait 60s for crictl version
	I1027 20:01:36.061472  462995 ssh_runner.go:195] Run: which crictl
	I1027 20:01:36.065865  462995 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 20:01:36.097719  462995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 20:01:36.097820  462995 ssh_runner.go:195] Run: crio --version
	I1027 20:01:36.130037  462995 ssh_runner.go:195] Run: crio --version
	I1027 20:01:36.175848  462995 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1027 20:01:33.755107  460048 pod_ready.go:104] pod "coredns-66bc5c9577-jlg4z" is not "Ready", error: <nil>
	I1027 20:01:34.752002  460048 pod_ready.go:94] pod "coredns-66bc5c9577-jlg4z" is "Ready"
	I1027 20:01:34.752028  460048 pod_ready.go:86] duration metric: took 37.505469588s for pod "coredns-66bc5c9577-jlg4z" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:34.755657  460048 pod_ready.go:83] waiting for pod "etcd-no-preload-300878" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:34.760043  460048 pod_ready.go:94] pod "etcd-no-preload-300878" is "Ready"
	I1027 20:01:34.760070  460048 pod_ready.go:86] duration metric: took 4.387643ms for pod "etcd-no-preload-300878" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:34.762162  460048 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-300878" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:34.766509  460048 pod_ready.go:94] pod "kube-apiserver-no-preload-300878" is "Ready"
	I1027 20:01:34.766537  460048 pod_ready.go:86] duration metric: took 4.348784ms for pod "kube-apiserver-no-preload-300878" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:34.768891  460048 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-300878" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:34.950790  460048 pod_ready.go:94] pod "kube-controller-manager-no-preload-300878" is "Ready"
	I1027 20:01:34.950816  460048 pod_ready.go:86] duration metric: took 181.897626ms for pod "kube-controller-manager-no-preload-300878" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:35.150338  460048 pod_ready.go:83] waiting for pod "kube-proxy-wpv4w" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:35.551499  460048 pod_ready.go:94] pod "kube-proxy-wpv4w" is "Ready"
	I1027 20:01:35.551527  460048 pod_ready.go:86] duration metric: took 401.158248ms for pod "kube-proxy-wpv4w" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:35.750780  460048 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-300878" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:36.149903  460048 pod_ready.go:94] pod "kube-scheduler-no-preload-300878" is "Ready"
	I1027 20:01:36.149930  460048 pod_ready.go:86] duration metric: took 399.123737ms for pod "kube-scheduler-no-preload-300878" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:36.149941  460048 pod_ready.go:40] duration metric: took 38.910741823s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 20:01:36.218433  460048 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 20:01:36.221592  460048 out.go:179] * Done! kubectl is now configured to use "no-preload-300878" cluster and "default" namespace by default
	I1027 20:01:36.178799  462995 cli_runner.go:164] Run: docker network inspect embed-certs-629838 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 20:01:36.198470  462995 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1027 20:01:36.204006  462995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 20:01:36.219940  462995 kubeadm.go:883] updating cluster {Name:embed-certs-629838 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-629838 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 20:01:36.220078  462995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:01:36.220143  462995 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 20:01:36.286214  462995 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 20:01:36.286240  462995 crio.go:433] Images already preloaded, skipping extraction
	I1027 20:01:36.286297  462995 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 20:01:36.323318  462995 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 20:01:36.323343  462995 cache_images.go:85] Images are preloaded, skipping loading
	I1027 20:01:36.323351  462995 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1027 20:01:36.323461  462995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-629838 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-629838 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 20:01:36.323552  462995 ssh_runner.go:195] Run: crio config
	I1027 20:01:36.411757  462995 cni.go:84] Creating CNI manager for ""
	I1027 20:01:36.411824  462995 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 20:01:36.411861  462995 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 20:01:36.411913  462995 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-629838 NodeName:embed-certs-629838 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 20:01:36.412175  462995 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-629838"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 20:01:36.412270  462995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 20:01:36.420959  462995 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 20:01:36.421035  462995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 20:01:36.435459  462995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1027 20:01:36.450638  462995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 20:01:36.470156  462995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1027 20:01:36.489409  462995 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1027 20:01:36.494344  462995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 20:01:36.507912  462995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:01:36.690261  462995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 20:01:36.707735  462995 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838 for IP: 192.168.76.2
	I1027 20:01:36.707752  462995 certs.go:195] generating shared ca certs ...
	I1027 20:01:36.707769  462995 certs.go:227] acquiring lock for ca certs: {Name:mk172548fc54811b51040cc0201d01eb4b3dd19b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:01:36.707928  462995 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key
	I1027 20:01:36.707973  462995 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key
	I1027 20:01:36.707980  462995 certs.go:257] generating profile certs ...
	I1027 20:01:36.708077  462995 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/client.key
	I1027 20:01:36.708138  462995 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/apiserver.key.4ab968a1
	I1027 20:01:36.708177  462995 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/proxy-client.key
	I1027 20:01:36.708293  462995 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880.pem (1338 bytes)
	W1027 20:01:36.708322  462995 certs.go:480] ignoring /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880_empty.pem, impossibly tiny 0 bytes
	I1027 20:01:36.708330  462995 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 20:01:36.708353  462995 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem (1078 bytes)
	I1027 20:01:36.708375  462995 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem (1123 bytes)
	I1027 20:01:36.708396  462995 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem (1675 bytes)
	I1027 20:01:36.708435  462995 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 20:01:36.709017  462995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 20:01:36.744889  462995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 20:01:36.777584  462995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 20:01:36.794875  462995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 20:01:36.815804  462995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1027 20:01:36.834379  462995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 20:01:36.853708  462995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 20:01:36.882817  462995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 20:01:36.902280  462995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /usr/share/ca-certificates/2678802.pem (1708 bytes)
	I1027 20:01:36.920069  462995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 20:01:36.937879  462995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880.pem --> /usr/share/ca-certificates/267880.pem (1338 bytes)
	I1027 20:01:36.966080  462995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 20:01:36.981440  462995 ssh_runner.go:195] Run: openssl version
	I1027 20:01:36.989383  462995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 20:01:36.999806  462995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:01:37.007330  462995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:01:37.007465  462995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:01:37.055315  462995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 20:01:37.063170  462995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/267880.pem && ln -fs /usr/share/ca-certificates/267880.pem /etc/ssl/certs/267880.pem"
	I1027 20:01:37.071876  462995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/267880.pem
	I1027 20:01:37.075564  462995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:04 /usr/share/ca-certificates/267880.pem
	I1027 20:01:37.075629  462995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/267880.pem
	I1027 20:01:37.117361  462995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/267880.pem /etc/ssl/certs/51391683.0"
	I1027 20:01:37.125994  462995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2678802.pem && ln -fs /usr/share/ca-certificates/2678802.pem /etc/ssl/certs/2678802.pem"
	I1027 20:01:37.135835  462995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2678802.pem
	I1027 20:01:37.140087  462995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:04 /usr/share/ca-certificates/2678802.pem
	I1027 20:01:37.140247  462995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2678802.pem
	I1027 20:01:37.182159  462995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2678802.pem /etc/ssl/certs/3ec20f2e.0"
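	(The 8-hex-digit link names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names: CApath-style lookups locate a CA by the hash of its subject, so each certificate gets a <hash>.0 symlink in /etc/ssl/certs. Reproducible by hand:
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	)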
	I1027 20:01:37.191516  462995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 20:01:37.195534  462995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 20:01:37.238399  462995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 20:01:37.279918  462995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 20:01:37.321139  462995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 20:01:37.362872  462995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 20:01:37.403995  462995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
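	(Each "-checkend 86400" run above makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24h); that exit code is what decides whether a cert needs regeneration. Standalone:
	    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	      && echo "valid for >24h" || echo "expires within 24h"
	)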
	I1027 20:01:37.447091  462995 kubeadm.go:400] StartCluster: {Name:embed-certs-629838 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-629838 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:01:37.447188  462995 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 20:01:37.447308  462995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 20:01:37.490540  462995 cri.go:89] found id: ""
	I1027 20:01:37.490640  462995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 20:01:37.500149  462995 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1027 20:01:37.500171  462995 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1027 20:01:37.500235  462995 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 20:01:37.518670  462995 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 20:01:37.519273  462995 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-629838" does not appear in /home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 20:01:37.519600  462995 kubeconfig.go:62] /home/jenkins/minikube-integration/21801-266035/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-629838" cluster setting kubeconfig missing "embed-certs-629838" context setting]
	I1027 20:01:37.520107  462995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/kubeconfig: {Name:mk0a03a594f8d9f9199b1c7cff7c550cec8414c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:01:37.521481  462995 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 20:01:37.537828  462995 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1027 20:01:37.537874  462995 kubeadm.go:601] duration metric: took 37.687919ms to restartPrimaryControlPlane
	I1027 20:01:37.537884  462995 kubeadm.go:402] duration metric: took 90.812698ms to StartCluster
	I1027 20:01:37.537926  462995 settings.go:142] acquiring lock: {Name:mk49b9e2e9b7901c6b51e89b8b1e8ee7f9e88107 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:01:37.538009  462995 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 20:01:37.539429  462995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/kubeconfig: {Name:mk0a03a594f8d9f9199b1c7cff7c550cec8414c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:01:37.539929  462995 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 20:01:37.540195  462995 config.go:182] Loaded profile config "embed-certs-629838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:01:37.540346  462995 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 20:01:37.540426  462995 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-629838"
	I1027 20:01:37.540439  462995 addons.go:69] Setting dashboard=true in profile "embed-certs-629838"
	I1027 20:01:37.540446  462995 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-629838"
	I1027 20:01:37.540452  462995 addons.go:238] Setting addon dashboard=true in "embed-certs-629838"
	W1027 20:01:37.540459  462995 addons.go:247] addon dashboard should already be in state true
	W1027 20:01:37.540453  462995 addons.go:247] addon storage-provisioner should already be in state true
	I1027 20:01:37.540486  462995 host.go:66] Checking if "embed-certs-629838" exists ...
	I1027 20:01:37.540493  462995 host.go:66] Checking if "embed-certs-629838" exists ...
	I1027 20:01:37.540960  462995 cli_runner.go:164] Run: docker container inspect embed-certs-629838 --format={{.State.Status}}
	I1027 20:01:37.540970  462995 cli_runner.go:164] Run: docker container inspect embed-certs-629838 --format={{.State.Status}}
	I1027 20:01:37.540459  462995 addons.go:69] Setting default-storageclass=true in profile "embed-certs-629838"
	I1027 20:01:37.541523  462995 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-629838"
	I1027 20:01:37.541783  462995 cli_runner.go:164] Run: docker container inspect embed-certs-629838 --format={{.State.Status}}
	I1027 20:01:37.548701  462995 out.go:179] * Verifying Kubernetes components...
	I1027 20:01:37.565625  462995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:01:37.593461  462995 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 20:01:37.596859  462995 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1027 20:01:37.596975  462995 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 20:01:37.596986  462995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 20:01:37.597051  462995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 20:01:37.605190  462995 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1027 20:01:37.608131  462995 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1027 20:01:37.608157  462995 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1027 20:01:37.608225  462995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 20:01:37.616441  462995 addons.go:238] Setting addon default-storageclass=true in "embed-certs-629838"
	W1027 20:01:37.616464  462995 addons.go:247] addon default-storageclass should already be in state true
	I1027 20:01:37.616487  462995 host.go:66] Checking if "embed-certs-629838" exists ...
	I1027 20:01:37.616914  462995 cli_runner.go:164] Run: docker container inspect embed-certs-629838 --format={{.State.Status}}
	I1027 20:01:37.644963  462995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/embed-certs-629838/id_rsa Username:docker}
	I1027 20:01:37.668778  462995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/embed-certs-629838/id_rsa Username:docker}
	I1027 20:01:37.680851  462995 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 20:01:37.680873  462995 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 20:01:37.680932  462995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 20:01:37.706299  462995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/embed-certs-629838/id_rsa Username:docker}
	I1027 20:01:37.908919  462995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 20:01:37.932674  462995 node_ready.go:35] waiting up to 6m0s for node "embed-certs-629838" to be "Ready" ...
	I1027 20:01:37.965532  462995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 20:01:38.039363  462995 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1027 20:01:38.039387  462995 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1027 20:01:38.081538  462995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 20:01:38.121006  462995 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1027 20:01:38.121075  462995 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1027 20:01:38.162751  462995 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1027 20:01:38.162825  462995 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1027 20:01:38.217182  462995 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1027 20:01:38.217253  462995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1027 20:01:38.261360  462995 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1027 20:01:38.261430  462995 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1027 20:01:38.274878  462995 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1027 20:01:38.274949  462995 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1027 20:01:38.288704  462995 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1027 20:01:38.288776  462995 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1027 20:01:38.301449  462995 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1027 20:01:38.301520  462995 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1027 20:01:38.315677  462995 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 20:01:38.315746  462995 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1027 20:01:38.328405  462995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 20:01:42.487268  462995 node_ready.go:49] node "embed-certs-629838" is "Ready"
	I1027 20:01:42.487304  462995 node_ready.go:38] duration metric: took 4.554590314s for node "embed-certs-629838" to be "Ready" ...
	I1027 20:01:42.487317  462995 api_server.go:52] waiting for apiserver process to appear ...
	I1027 20:01:42.487376  462995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 20:01:44.067599  462995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.102033779s)
	I1027 20:01:44.067711  462995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.986100965s)
	I1027 20:01:44.132211  462995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.803719799s)
	I1027 20:01:44.132449  462995 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.645057063s)
	I1027 20:01:44.132489  462995 api_server.go:72] duration metric: took 6.592524488s to wait for apiserver process to appear ...
	I1027 20:01:44.132510  462995 api_server.go:88] waiting for apiserver healthz status ...
	I1027 20:01:44.132542  462995 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 20:01:44.135382  462995 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-629838 addons enable metrics-server
	
	I1027 20:01:44.138270  462995 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1027 20:01:44.141108  462995 addons.go:514] duration metric: took 6.600758834s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1027 20:01:44.145345  462995 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 20:01:44.145367  462995 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
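The 500 above is down to a single failed check, poststarthook/rbac/bootstrap-roles, which stays red until the restarted apiserver finishes reconciling the default RBAC roles; the retry below returns 200 about half a second later. To probe the same endpoint by hand (assuming the current kubeconfig context points at this cluster):

    # verbose component-by-component healthz, as in the dump above
    kubectl get --raw '/healthz?verbose'
    # or query just the one check that was failing
    kubectl get --raw /healthz/poststarthook/rbac/bootstrap-roles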
	I1027 20:01:44.632927  462995 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 20:01:44.641645  462995 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1027 20:01:44.642699  462995 api_server.go:141] control plane version: v1.34.1
	I1027 20:01:44.642758  462995 api_server.go:131] duration metric: took 510.228202ms to wait for apiserver health ...
	I1027 20:01:44.642798  462995 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 20:01:44.654508  462995 system_pods.go:59] 8 kube-system pods found
	I1027 20:01:44.654623  462995 system_pods.go:61] "coredns-66bc5c9577-ch8jv" [31b0e0f4-af1b-40c7-9b20-0941025a0e20] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:01:44.654649  462995 system_pods.go:61] "etcd-embed-certs-629838" [8a3ea38f-5f58-4bc1-92a6-f623c5bbf89a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 20:01:44.654684  462995 system_pods.go:61] "kindnet-cfqpk" [26bd03b9-d7d1-42d3-9a0f-57a0079df4df] Running
	I1027 20:01:44.654715  462995 system_pods.go:61] "kube-apiserver-embed-certs-629838" [5a679f7d-bd7f-41f2-8494-368533168d94] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 20:01:44.654740  462995 system_pods.go:61] "kube-controller-manager-embed-certs-629838" [208c550a-f378-40f6-aa34-d3e6f628ec0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 20:01:44.654763  462995 system_pods.go:61] "kube-proxy-bwql6" [41eb9367-187c-4241-967f-46d7e5ff9003] Running
	I1027 20:01:44.654799  462995 system_pods.go:61] "kube-scheduler-embed-certs-629838" [79fc7d25-8b0a-46c9-82f0-714850ebf675] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 20:01:44.654837  462995 system_pods.go:61] "storage-provisioner" [39cc3c46-ef65-4c2e-8c82-6273f639f702] Running
	I1027 20:01:44.654860  462995 system_pods.go:74] duration metric: took 12.037833ms to wait for pod list to return data ...
	I1027 20:01:44.654882  462995 default_sa.go:34] waiting for default service account to be created ...
	I1027 20:01:44.660144  462995 default_sa.go:45] found service account: "default"
	I1027 20:01:44.660207  462995 default_sa.go:55] duration metric: took 5.291602ms for default service account to be created ...
	I1027 20:01:44.660232  462995 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 20:01:44.665243  462995 system_pods.go:86] 8 kube-system pods found
	I1027 20:01:44.665328  462995 system_pods.go:89] "coredns-66bc5c9577-ch8jv" [31b0e0f4-af1b-40c7-9b20-0941025a0e20] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:01:44.665355  462995 system_pods.go:89] "etcd-embed-certs-629838" [8a3ea38f-5f58-4bc1-92a6-f623c5bbf89a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 20:01:44.665396  462995 system_pods.go:89] "kindnet-cfqpk" [26bd03b9-d7d1-42d3-9a0f-57a0079df4df] Running
	I1027 20:01:44.665427  462995 system_pods.go:89] "kube-apiserver-embed-certs-629838" [5a679f7d-bd7f-41f2-8494-368533168d94] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 20:01:44.665453  462995 system_pods.go:89] "kube-controller-manager-embed-certs-629838" [208c550a-f378-40f6-aa34-d3e6f628ec0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 20:01:44.665475  462995 system_pods.go:89] "kube-proxy-bwql6" [41eb9367-187c-4241-967f-46d7e5ff9003] Running
	I1027 20:01:44.665511  462995 system_pods.go:89] "kube-scheduler-embed-certs-629838" [79fc7d25-8b0a-46c9-82f0-714850ebf675] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 20:01:44.665541  462995 system_pods.go:89] "storage-provisioner" [39cc3c46-ef65-4c2e-8c82-6273f639f702] Running
	I1027 20:01:44.665566  462995 system_pods.go:126] duration metric: took 5.314813ms to wait for k8s-apps to be running ...
	I1027 20:01:44.665588  462995 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 20:01:44.665672  462995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 20:01:44.679720  462995 system_svc.go:56] duration metric: took 14.123371ms WaitForService to wait for kubelet
	I1027 20:01:44.679756  462995 kubeadm.go:586] duration metric: took 7.13979002s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 20:01:44.679776  462995 node_conditions.go:102] verifying NodePressure condition ...
	I1027 20:01:44.682851  462995 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 20:01:44.682880  462995 node_conditions.go:123] node cpu capacity is 2
	I1027 20:01:44.682901  462995 node_conditions.go:105] duration metric: took 3.111388ms to run NodePressure ...
	I1027 20:01:44.682915  462995 start.go:241] waiting for startup goroutines ...
	I1027 20:01:44.682926  462995 start.go:246] waiting for cluster config update ...
	I1027 20:01:44.682937  462995 start.go:255] writing updated cluster config ...
	I1027 20:01:44.683329  462995 ssh_runner.go:195] Run: rm -f paused
	I1027 20:01:44.687463  462995 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 20:01:44.692303  462995 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ch8jv" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 20:01:46.697902  462995 pod_ready.go:104] pod "coredns-66bc5c9577-ch8jv" is not "Ready", error: <nil>
	W1027 20:01:48.708297  462995 pod_ready.go:104] pod "coredns-66bc5c9577-ch8jv" is not "Ready", error: <nil>
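The pod_ready warnings above are the normal polling loop: minikube re-checks the labelled kube-system pods roughly every two seconds until each is Ready or gone, within the 4m0s budget. A rough hand-rolled equivalent for the coredns pod, assuming kubectl points at the embed-certs-629838 cluster:

    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=240s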
	
	
	==> CRI-O <==
	Oct 27 20:01:20 no-preload-300878 crio[650]: time="2025-10-27T20:01:20.724724221Z" level=info msg="Removed container 68caf56d837a229d05c8f1654e944dba99fa24fa78befabc04c543175571b2ad: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8q7f/dashboard-metrics-scraper" id=d742916f-69e3-42be-a116-bcdcf2f6c3d3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 20:01:26 no-preload-300878 conmon[1139]: conmon ce4b2f0831d4b6d80de7 <ninfo>: container 1146 exited with status 1
	Oct 27 20:01:26 no-preload-300878 crio[650]: time="2025-10-27T20:01:26.720403968Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=33c6561d-da46-48b5-b4fa-40c7b8af9c7c name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:01:26 no-preload-300878 crio[650]: time="2025-10-27T20:01:26.721902273Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1f40684e-b23b-47e3-8cba-96f76a9ce5b1 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:01:26 no-preload-300878 crio[650]: time="2025-10-27T20:01:26.723772719Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d19a84e3-6e0f-463d-a097-85b3d73dcadc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 20:01:26 no-preload-300878 crio[650]: time="2025-10-27T20:01:26.724016215Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:01:26 no-preload-300878 crio[650]: time="2025-10-27T20:01:26.73052631Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:01:26 no-preload-300878 crio[650]: time="2025-10-27T20:01:26.730846481Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/08993c2fea66d54883bfee7248cb629ecc8f4582be7b4b473609cb4c530d3969/merged/etc/passwd: no such file or directory"
	Oct 27 20:01:26 no-preload-300878 crio[650]: time="2025-10-27T20:01:26.730941814Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/08993c2fea66d54883bfee7248cb629ecc8f4582be7b4b473609cb4c530d3969/merged/etc/group: no such file or directory"
	Oct 27 20:01:26 no-preload-300878 crio[650]: time="2025-10-27T20:01:26.731267803Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:01:26 no-preload-300878 crio[650]: time="2025-10-27T20:01:26.751356658Z" level=info msg="Created container b445fba8c8ac67334825dd1de6351eea9e0bc3f983f24bfdffca5c882248e749: kube-system/storage-provisioner/storage-provisioner" id=d19a84e3-6e0f-463d-a097-85b3d73dcadc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 20:01:26 no-preload-300878 crio[650]: time="2025-10-27T20:01:26.756295851Z" level=info msg="Starting container: b445fba8c8ac67334825dd1de6351eea9e0bc3f983f24bfdffca5c882248e749" id=9e6a4105-ce58-4491-a716-5a9856025d8e name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 20:01:26 no-preload-300878 crio[650]: time="2025-10-27T20:01:26.758002659Z" level=info msg="Started container" PID=1626 containerID=b445fba8c8ac67334825dd1de6351eea9e0bc3f983f24bfdffca5c882248e749 description=kube-system/storage-provisioner/storage-provisioner id=9e6a4105-ce58-4491-a716-5a9856025d8e name=/runtime.v1.RuntimeService/StartContainer sandboxID=29561a3cf2bbe151847e8a3e42dfee256bcf867b8934334749987b9529e1211a
	Oct 27 20:01:36 no-preload-300878 crio[650]: time="2025-10-27T20:01:36.566874405Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 20:01:36 no-preload-300878 crio[650]: time="2025-10-27T20:01:36.573071482Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 20:01:36 no-preload-300878 crio[650]: time="2025-10-27T20:01:36.573095572Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 20:01:36 no-preload-300878 crio[650]: time="2025-10-27T20:01:36.573115526Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 20:01:36 no-preload-300878 crio[650]: time="2025-10-27T20:01:36.581435161Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 20:01:36 no-preload-300878 crio[650]: time="2025-10-27T20:01:36.581469999Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 20:01:36 no-preload-300878 crio[650]: time="2025-10-27T20:01:36.58148819Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 20:01:36 no-preload-300878 crio[650]: time="2025-10-27T20:01:36.587836344Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 20:01:36 no-preload-300878 crio[650]: time="2025-10-27T20:01:36.587869508Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 20:01:36 no-preload-300878 crio[650]: time="2025-10-27T20:01:36.587890964Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 20:01:36 no-preload-300878 crio[650]: time="2025-10-27T20:01:36.599833291Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 20:01:36 no-preload-300878 crio[650]: time="2025-10-27T20:01:36.599868318Z" level=info msg="Updated default CNI network name to kindnet"
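The container IDs in the CRI-O entries above (b445fba8c8ac..., ce4b2f0831d4b...) reappear in the status table below, so the two sections can be cross-checked against the runtime directly. A sketch, run on the node:

    # list all containers, including exited ones (matches the table below)
    sudo crictl ps -a
    # fetch logs for the storage-provisioner attempt that exited with status 1
    sudo crictl logs ce4b2f0831d4b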
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	b445fba8c8ac6       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           24 seconds ago       Running             storage-provisioner         2                   29561a3cf2bbe       storage-provisioner                          kube-system
	ff4d1e7aa0afd       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           31 seconds ago       Exited              dashboard-metrics-scraper   2                   44ac3d2be1273       dashboard-metrics-scraper-6ffb444bf9-p8q7f   kubernetes-dashboard
	845ed893ecacd       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   40 seconds ago       Running             kubernetes-dashboard        0                   12950a2f56cc5       kubernetes-dashboard-855c9754f9-hqxgb        kubernetes-dashboard
	b6ddf809f10be       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   2cf02bb1a4d05       busybox                                      default
	e2edb66752f03       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           55 seconds ago       Running             kube-proxy                  1                   b5380d652fbbd       kube-proxy-wpv4w                             kube-system
	ce4b2f0831d4b       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           55 seconds ago       Exited              storage-provisioner         1                   29561a3cf2bbe       storage-provisioner                          kube-system
	3627ed707f2d5       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           55 seconds ago       Running             coredns                     1                   f4a9e567480ee       coredns-66bc5c9577-jlg4z                     kube-system
	dab65465dca39       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   8e47a9a281004       kindnet-smnp2                                kube-system
	13d030edd8243       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   53fc1fe688275       kube-apiserver-no-preload-300878             kube-system
	e280576b7cd34       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   a71254e85ef08       etcd-no-preload-300878                       kube-system
	2e749e0d2383f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   2e55a0b642cf6       kube-controller-manager-no-preload-300878    kube-system
	75c7134600c5c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   a8799155ff185       kube-scheduler-no-preload-300878             kube-system
	
	
	==> coredns [3627ed707f2d56dfd79e2f7904b8af77c14c72df05c03340c5194af8a728a9c5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35597 - 966 "HINFO IN 3528978051479716237.7348152511424593718. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028944205s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
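The dial tcp 10.96.0.1:443: i/o timeout errors above mean coredns could not reach the kubernetes service VIP while the control plane and proxy rules were still coming back up; they stop once connectivity is restored. One way to test the VIP from inside the cluster (curlimages/curl is just a convenient assumption here; any image with curl works):

    kubectl run viptest --rm -it --restart=Never --image=curlimages/curl -- \
      curl -k -m 5 https://10.96.0.1:443/healthz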
	
	
	==> describe nodes <==
	Name:               no-preload-300878
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-300878
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=no-preload-300878
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T19_59_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 19:59:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-300878
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 20:01:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 20:01:36 +0000   Mon, 27 Oct 2025 19:59:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 20:01:36 +0000   Mon, 27 Oct 2025 19:59:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 20:01:36 +0000   Mon, 27 Oct 2025 19:59:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 20:01:36 +0000   Mon, 27 Oct 2025 20:00:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-300878
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                efc50928-8e8e-470b-97b1-2b65f64ae45b
	  Boot ID:                    23ea05b4-8203-4fb9-a84a-5deb4b091cbb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-jlg4z                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     114s
	  kube-system                 etcd-no-preload-300878                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m2s
	  kube-system                 kindnet-smnp2                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-no-preload-300878              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-no-preload-300878     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-wpv4w                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-no-preload-300878              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-p8q7f    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hqxgb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 112s                 kube-proxy       
	  Normal   Starting                 54s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  2m9s (x8 over 2m9s)  kubelet          Node no-preload-300878 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m9s (x8 over 2m9s)  kubelet          Node no-preload-300878 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m9s (x8 over 2m9s)  kubelet          Node no-preload-300878 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m                   kubelet          Node no-preload-300878 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m                   kubelet          Node no-preload-300878 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m                   kubelet          Node no-preload-300878 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           115s                 node-controller  Node no-preload-300878 event: Registered Node no-preload-300878 in Controller
	  Normal   NodeReady                97s                  kubelet          Node no-preload-300878 status is now: NodeReady
	  Normal   Starting                 61s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)    kubelet          Node no-preload-300878 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)    kubelet          Node no-preload-300878 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x8 over 61s)    kubelet          Node no-preload-300878 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                  node-controller  Node no-preload-300878 event: Registered Node no-preload-300878 in Controller
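The node report above is plain kubectl output: the Allocated resources block sums requests and limits over the non-terminated pods listed, against the 2-CPU / 8022304Ki allocatable figures. To reproduce it, or to get live usage (the latter needs the metrics-server addon, which this suite exercises separately):

    kubectl describe node no-preload-300878
    kubectl top node no-preload-300878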
	
	
	==> dmesg <==
	[Oct27 19:37] overlayfs: idmapped layers are currently not supported
	[Oct27 19:38] overlayfs: idmapped layers are currently not supported
	[Oct27 19:39] overlayfs: idmapped layers are currently not supported
	[Oct27 19:40] overlayfs: idmapped layers are currently not supported
	[  +7.015891] overlayfs: idmapped layers are currently not supported
	[Oct27 19:41] overlayfs: idmapped layers are currently not supported
	[ +28.455216] overlayfs: idmapped layers are currently not supported
	[Oct27 19:42] overlayfs: idmapped layers are currently not supported
	[ +26.490174] overlayfs: idmapped layers are currently not supported
	[Oct27 19:44] overlayfs: idmapped layers are currently not supported
	[Oct27 19:45] overlayfs: idmapped layers are currently not supported
	[Oct27 19:47] overlayfs: idmapped layers are currently not supported
	[Oct27 19:49] overlayfs: idmapped layers are currently not supported
	[ +31.410335] overlayfs: idmapped layers are currently not supported
	[Oct27 19:51] overlayfs: idmapped layers are currently not supported
	[Oct27 19:53] overlayfs: idmapped layers are currently not supported
	[Oct27 19:54] overlayfs: idmapped layers are currently not supported
	[Oct27 19:55] overlayfs: idmapped layers are currently not supported
	[Oct27 19:56] overlayfs: idmapped layers are currently not supported
	[Oct27 19:57] overlayfs: idmapped layers are currently not supported
	[Oct27 19:58] overlayfs: idmapped layers are currently not supported
	[Oct27 19:59] overlayfs: idmapped layers are currently not supported
	[Oct27 20:00] overlayfs: idmapped layers are currently not supported
	[ +41.321877] overlayfs: idmapped layers are currently not supported
	[Oct27 20:01] overlayfs: idmapped layers are currently not supported
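The repeated overlayfs: idmapped layers are currently not supported lines are kernel warnings, one per overlay mount attempted with an id-mapped layer; this 5.15 kernel predates that overlayfs feature, and the messages are benign noise in these runs. To filter them out when scanning for real problems:

    sudo dmesg --ctime | grep -v 'idmapped layers'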
	
	
	==> etcd [e280576b7cd349242f75cfabe39f57b429b7f1be59a692085c1ac72054e39d40] <==
	{"level":"warn","ts":"2025-10-27T20:00:53.796242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:53.850575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:53.854851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:53.871568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:53.892631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:53.911266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:53.923966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:53.944707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:53.959898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:53.978408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:54.005220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:54.023055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:54.042902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:54.059075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:54.073909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:54.091298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:54.108519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:54.132121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:54.146322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:54.192119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:54.236838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:54.269469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:54.286282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:54.303394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:54.374275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58258","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:01:52 up  2:44,  0 user,  load average: 2.81, 2.98, 2.62
	Linux no-preload-300878 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [dab65465dca396808c706937f4961cc55e7b5490a396435f7a5ce712e477451c] <==
	I1027 20:00:56.391570       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 20:00:56.391797       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1027 20:00:56.391931       1 main.go:148] setting mtu 1500 for CNI 
	I1027 20:00:56.391943       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 20:00:56.391954       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T20:00:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 20:00:56.565743       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 20:00:56.565767       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 20:00:56.565776       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 20:00:56.619556       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 20:01:26.565966       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1027 20:01:26.619673       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1027 20:01:26.619673       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1027 20:01:26.619873       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1027 20:01:28.166266       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 20:01:28.166308       1 metrics.go:72] Registering metrics
	I1027 20:01:28.166382       1 controller.go:711] "Syncing nftables rules"
	I1027 20:01:36.565390       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 20:01:36.565436       1 main.go:301] handling current node
	I1027 20:01:46.567051       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 20:01:46.567195       1 main.go:301] handling current node
	
	
	==> kube-apiserver [13d030edd8243f160331166818aa22d925ebe9d23d02c061f90430ac5760b9ea] <==
	I1027 20:00:55.596274       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1027 20:00:55.596314       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1027 20:00:55.596350       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 20:00:55.603882       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 20:00:55.604682       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 20:00:55.604763       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 20:00:55.618706       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1027 20:00:55.619292       1 aggregator.go:171] initial CRD sync complete...
	I1027 20:00:55.619309       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 20:00:55.619316       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 20:00:55.619322       1 cache.go:39] Caches are synced for autoregister controller
	I1027 20:00:55.626791       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 20:00:55.628671       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1027 20:00:55.629040       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1027 20:00:55.755021       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1027 20:00:56.097239       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 20:00:56.833892       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 20:00:56.900985       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 20:00:56.937648       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 20:00:56.954796       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 20:00:57.092408       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.45.112"}
	I1027 20:00:57.128858       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.122.251"}
	I1027 20:00:58.614193       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 20:00:59.108240       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 20:00:59.214204       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [2e749e0d2383f2957caa1a338e460f11d3d14d875b53417f4e0cd2479aad76e0] <==
	I1027 20:00:58.615572       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 20:00:58.614864       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1027 20:00:58.618225       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 20:00:58.621293       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 20:00:58.626725       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 20:00:58.627385       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 20:00:58.633670       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 20:00:58.633779       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 20:00:58.633679       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 20:00:58.636387       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 20:00:58.636773       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1027 20:00:58.640101       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 20:00:58.644883       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 20:00:58.651497       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 20:00:58.652655       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 20:00:58.652664       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 20:00:58.652813       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 20:00:58.652841       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 20:00:58.652965       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 20:00:58.653277       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-300878"
	I1027 20:00:58.653352       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1027 20:00:58.657694       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 20:00:58.666897       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 20:00:59.114748       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1027 20:00:59.114955       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
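The two "Error syncing endpoint slices ... retrying" lines are benign: the controller sees a stale informer cache right after startup and simply retries. A minimal sketch of that retry-with-backoff pattern using k8s.io/apimachinery's wait helpers (illustrative only, not the endpointslice controller's actual code; syncOnce and errStaleCache are invented for the example):

	// retry_sketch.go - retry-on-stale-cache pattern, assuming k8s.io/apimachinery.
	package main

	import (
		"errors"
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	var errStaleCache = errors.New("EndpointSlice informer cache is out of date")

	// syncOnce pretends the informer cache lags for the first two attempts.
	func syncOnce(attempt int) error {
		if attempt < 3 {
			return errStaleCache
		}
		return nil
	}

	func main() {
		attempt := 0
		err := wait.ExponentialBackoff(wait.Backoff{
			Duration: 100 * time.Millisecond, // first retry delay
			Factor:   2.0,                    // double it each step
			Steps:    5,                      // give up after five tries
		}, func() (bool, error) {
			attempt++
			if errors.Is(syncOnce(attempt), errStaleCache) {
				return false, nil // retryable: back off and try again
			}
			return true, nil
		})
		fmt.Printf("synced after %d attempts (err=%v)\n", attempt, err)
	}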
	
	
	==> kube-proxy [e2edb66752f0320ca15324996f37b99874c7495a2aef8abe85781a5d7bfa18cf] <==
	I1027 20:00:56.963899       1 server_linux.go:53] "Using iptables proxy"
	I1027 20:00:57.192843       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 20:00:57.293982       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 20:00:57.295145       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1027 20:00:57.295254       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 20:00:57.340945       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 20:00:57.340995       1 server_linux.go:132] "Using iptables Proxier"
	I1027 20:00:57.348701       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 20:00:57.349175       1 server.go:527] "Version info" version="v1.34.1"
	I1027 20:00:57.349198       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 20:00:57.355749       1 config.go:106] "Starting endpoint slice config controller"
	I1027 20:00:57.355772       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 20:00:57.356056       1 config.go:200] "Starting service config controller"
	I1027 20:00:57.356071       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 20:00:57.358406       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 20:00:57.358436       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 20:00:57.359504       1 config.go:309] "Starting node config controller"
	I1027 20:00:57.359522       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 20:00:57.359529       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 20:00:57.456560       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 20:00:57.456662       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 20:00:57.458595       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
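The "nodePortAddresses is unset" warning above means kube-proxy will accept NodePort connections on every local IP, including loopback. A small illustrative Go listing (not kube-proxy code) of what that address set looks like on a host:

	// nodeport_ips.go - illustration only: with nodePortAddresses unset, every
	// local IP enumerated here would accept NodePort traffic.
	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			panic(err)
		}
		for _, a := range addrs {
			if ipn, ok := a.(*net.IPNet); ok {
				fmt.Println(ipn.IP)
			}
		}
	}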
	
	
	==> kube-scheduler [75c7134600c5c012795172e75d3a8a7ce7dfa5f7d06a557943649171a009abd6] <==
	I1027 20:00:54.633840       1 serving.go:386] Generated self-signed cert in-memory
	I1027 20:00:57.186845       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 20:00:57.186880       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 20:00:57.203837       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 20:00:57.206081       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1027 20:00:57.206202       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1027 20:00:57.206267       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 20:00:57.211234       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:00:57.211276       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:00:57.211298       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 20:00:57.211304       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 20:00:57.306476       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1027 20:00:57.311897       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 20:00:57.312004       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 20:00:59 no-preload-300878 kubelet[767]: I1027 20:00:59.082606     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4w8x\" (UniqueName: \"kubernetes.io/projected/c3f77740-952e-48ea-b5fe-d07800ef585f-kube-api-access-l4w8x\") pod \"kubernetes-dashboard-855c9754f9-hqxgb\" (UID: \"c3f77740-952e-48ea-b5fe-d07800ef585f\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hqxgb"
	Oct 27 20:00:59 no-preload-300878 kubelet[767]: W1027 20:00:59.361711     767 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5f7533431bd6ad05927fc6ee4ffbe127ba4e775919d072603a24e3cd4e1d5d89/crio-44ac3d2be12733e7e06d23f818c327414a507746ad4091e630930a952398a8dc WatchSource:0}: Error finding container 44ac3d2be12733e7e06d23f818c327414a507746ad4091e630930a952398a8dc: Status 404 returned error can't find the container with id 44ac3d2be12733e7e06d23f818c327414a507746ad4091e630930a952398a8dc
	Oct 27 20:00:59 no-preload-300878 kubelet[767]: W1027 20:00:59.363412     767 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5f7533431bd6ad05927fc6ee4ffbe127ba4e775919d072603a24e3cd4e1d5d89/crio-12950a2f56cc558942c2b3d8792e5546d8432ff771d4e5cd01cd9e8906e265d0 WatchSource:0}: Error finding container 12950a2f56cc558942c2b3d8792e5546d8432ff771d4e5cd01cd9e8906e265d0: Status 404 returned error can't find the container with id 12950a2f56cc558942c2b3d8792e5546d8432ff771d4e5cd01cd9e8906e265d0
	Oct 27 20:01:04 no-preload-300878 kubelet[767]: I1027 20:01:04.584092     767 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 27 20:01:04 no-preload-300878 kubelet[767]: I1027 20:01:04.654647     767 scope.go:117] "RemoveContainer" containerID="791b4f9fec6840e7391b4608ef4231a34141f6988492418ab62f86aec29bb939"
	Oct 27 20:01:05 no-preload-300878 kubelet[767]: I1027 20:01:05.658809     767 scope.go:117] "RemoveContainer" containerID="791b4f9fec6840e7391b4608ef4231a34141f6988492418ab62f86aec29bb939"
	Oct 27 20:01:05 no-preload-300878 kubelet[767]: I1027 20:01:05.658946     767 scope.go:117] "RemoveContainer" containerID="68caf56d837a229d05c8f1654e944dba99fa24fa78befabc04c543175571b2ad"
	Oct 27 20:01:05 no-preload-300878 kubelet[767]: E1027 20:01:05.659154     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p8q7f_kubernetes-dashboard(76bb9163-da1c-4c9e-8453-a56a9f293563)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8q7f" podUID="76bb9163-da1c-4c9e-8453-a56a9f293563"
	Oct 27 20:01:06 no-preload-300878 kubelet[767]: I1027 20:01:06.663988     767 scope.go:117] "RemoveContainer" containerID="68caf56d837a229d05c8f1654e944dba99fa24fa78befabc04c543175571b2ad"
	Oct 27 20:01:06 no-preload-300878 kubelet[767]: E1027 20:01:06.665641     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p8q7f_kubernetes-dashboard(76bb9163-da1c-4c9e-8453-a56a9f293563)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8q7f" podUID="76bb9163-da1c-4c9e-8453-a56a9f293563"
	Oct 27 20:01:08 no-preload-300878 kubelet[767]: I1027 20:01:08.924519     767 scope.go:117] "RemoveContainer" containerID="68caf56d837a229d05c8f1654e944dba99fa24fa78befabc04c543175571b2ad"
	Oct 27 20:01:08 no-preload-300878 kubelet[767]: E1027 20:01:08.924734     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p8q7f_kubernetes-dashboard(76bb9163-da1c-4c9e-8453-a56a9f293563)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8q7f" podUID="76bb9163-da1c-4c9e-8453-a56a9f293563"
	Oct 27 20:01:20 no-preload-300878 kubelet[767]: I1027 20:01:20.506610     767 scope.go:117] "RemoveContainer" containerID="68caf56d837a229d05c8f1654e944dba99fa24fa78befabc04c543175571b2ad"
	Oct 27 20:01:20 no-preload-300878 kubelet[767]: I1027 20:01:20.703222     767 scope.go:117] "RemoveContainer" containerID="68caf56d837a229d05c8f1654e944dba99fa24fa78befabc04c543175571b2ad"
	Oct 27 20:01:20 no-preload-300878 kubelet[767]: I1027 20:01:20.703480     767 scope.go:117] "RemoveContainer" containerID="ff4d1e7aa0afdad72f13361ebbf1d1951f4d0389c1da1547bb8463d6cbfc6973"
	Oct 27 20:01:20 no-preload-300878 kubelet[767]: E1027 20:01:20.703655     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p8q7f_kubernetes-dashboard(76bb9163-da1c-4c9e-8453-a56a9f293563)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8q7f" podUID="76bb9163-da1c-4c9e-8453-a56a9f293563"
	Oct 27 20:01:20 no-preload-300878 kubelet[767]: I1027 20:01:20.722626     767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hqxgb" podStartSLOduration=10.875250099 podStartE2EDuration="22.72261022s" podCreationTimestamp="2025-10-27 20:00:58 +0000 UTC" firstStartedPulling="2025-10-27 20:00:59.368836026 +0000 UTC m=+9.074621729" lastFinishedPulling="2025-10-27 20:01:11.216196139 +0000 UTC m=+20.921981850" observedRunningTime="2025-10-27 20:01:11.69300447 +0000 UTC m=+21.398790173" watchObservedRunningTime="2025-10-27 20:01:20.72261022 +0000 UTC m=+30.428395923"
	Oct 27 20:01:26 no-preload-300878 kubelet[767]: I1027 20:01:26.719827     767 scope.go:117] "RemoveContainer" containerID="ce4b2f0831d4b6d80de7c02268477bd3775a8d23bb376b0d95e7a73ee6e7f12f"
	Oct 27 20:01:28 no-preload-300878 kubelet[767]: I1027 20:01:28.925100     767 scope.go:117] "RemoveContainer" containerID="ff4d1e7aa0afdad72f13361ebbf1d1951f4d0389c1da1547bb8463d6cbfc6973"
	Oct 27 20:01:28 no-preload-300878 kubelet[767]: E1027 20:01:28.925276     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p8q7f_kubernetes-dashboard(76bb9163-da1c-4c9e-8453-a56a9f293563)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8q7f" podUID="76bb9163-da1c-4c9e-8453-a56a9f293563"
	Oct 27 20:01:40 no-preload-300878 kubelet[767]: I1027 20:01:40.507261     767 scope.go:117] "RemoveContainer" containerID="ff4d1e7aa0afdad72f13361ebbf1d1951f4d0389c1da1547bb8463d6cbfc6973"
	Oct 27 20:01:40 no-preload-300878 kubelet[767]: E1027 20:01:40.507458     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p8q7f_kubernetes-dashboard(76bb9163-da1c-4c9e-8453-a56a9f293563)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8q7f" podUID="76bb9163-da1c-4c9e-8453-a56a9f293563"
	Oct 27 20:01:48 no-preload-300878 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 20:01:48 no-preload-300878 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 20:01:48 no-preload-300878 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
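The dashboard-metrics-scraper back-off visible above (10s, then 20s) follows kubelet's CrashLoopBackOff schedule: the delay doubles per restart up to a five-minute cap. A tiny illustration of that progression (not kubelet source):

	// crashloop_backoff.go - the delay progression behind "back-off 10s/20s".
	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		delay := 10 * time.Second
		for restart := 1; restart <= 7; restart++ {
			fmt.Printf("restart %d: back-off %s\n", restart, delay)
			delay *= 2
			if delay > 5*time.Minute {
				delay = 5 * time.Minute // kubelet's ceiling
			}
		}
	}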
	
	
	==> kubernetes-dashboard [845ed893ecacdcea3dadb975971aab3db09c283ebdfc1d1660e45965d8599714] <==
	2025/10/27 20:01:11 Using namespace: kubernetes-dashboard
	2025/10/27 20:01:11 Using in-cluster config to connect to apiserver
	2025/10/27 20:01:11 Using secret token for csrf signing
	2025/10/27 20:01:11 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 20:01:11 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 20:01:11 Successful initial request to the apiserver, version: v1.34.1
	2025/10/27 20:01:11 Generating JWE encryption key
	2025/10/27 20:01:11 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 20:01:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 20:01:11 Initializing JWE encryption key from synchronized object
	2025/10/27 20:01:11 Creating in-cluster Sidecar client
	2025/10/27 20:01:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 20:01:11 Serving insecurely on HTTP port: 9090
	2025/10/27 20:01:11 Starting overwatch
	2025/10/27 20:01:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [b445fba8c8ac67334825dd1de6351eea9e0bc3f983f24bfdffca5c882248e749] <==
	I1027 20:01:26.778448       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 20:01:26.790014       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 20:01:26.790131       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 20:01:26.792453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:30.252141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:34.514394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:38.112894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:41.166679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:44.193710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:44.198505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 20:01:44.198682       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 20:01:44.199136       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3c1092eb-a7a2-455f-b121-d7c4d1adde3a", APIVersion:"v1", ResourceVersion:"677", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-300878_b9d259fd-3af8-4b1c-abe6-c933eb8312e2 became leader
	I1027 20:01:44.199372       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-300878_b9d259fd-3af8-4b1c-abe6-c933eb8312e2!
	W1027 20:01:44.211038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:44.227400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 20:01:44.299945       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-300878_b9d259fd-3af8-4b1c-abe6-c933eb8312e2!
	W1027 20:01:46.230663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:46.237821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:48.241728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:48.252288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:50.255329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:50.267783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:52.270606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:52.276519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
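Every warning in this block comes from the provisioner taking its leader-election lock on a v1 Endpoints object, which is deprecated since v1.33. A minimal sketch, assuming client-go is available, of the modern equivalent using a coordination.k8s.io Lease lock (this is not the provisioner's actual code; the lease name is copied from the log):

	// lease_election.go - Lease-based leader election sketch with client-go.
	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname()

		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second, // how long a lease stays valid
			RenewDeadline: 10 * time.Second, // leader must renew within this window
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("acquired lease; start provisioning") },
				OnStoppedLeading: func() { log.Println("lost lease; stop") },
			},
		})
	}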
	
	
	==> storage-provisioner [ce4b2f0831d4b6d80de7c02268477bd3775a8d23bb376b0d95e7a73ee6e7f12f] <==
	I1027 20:00:56.444380       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 20:01:26.487246       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
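The second storage-provisioner instance above died on startup because its initial GET /version against the service VIP timed out. A minimal sketch, assuming client-go, of that startup call (mirrors the F1027 line; not the provisioner's exact source):

	// version_check.go - the /version handshake the provisioner performs on start.
	package main

	import (
		"log"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		v, err := client.Discovery().ServerVersion()
		if err != nil {
			log.Fatalf("error getting server version: %v", err) // mirrors the F1027 line
		}
		log.Printf("apiserver version: %s", v.GitVersion)
	}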
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-300878 -n no-preload-300878
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-300878 -n no-preload-300878: exit status 2 (482.335615ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
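A sketch of what the harness is doing here: it shells out to minikube status with a Go template and inspects the exit code, since minikube encodes degraded-but-queryable states in non-zero codes (hence "exit status 2 (may be ok)"). Binary path and profile name are copied from the log; the rest is illustrative:

	// status_probe.go - run minikube status and read the exit code.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "status",
			"--format={{.APIServer}}", "-p", "no-preload-300878")
		out, err := cmd.CombinedOutput()
		fmt.Printf("output: %s", out)
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Println("exit code:", exitErr.ExitCode()) // 2 here: queryable but degraded
		}
	}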
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-300878 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-300878
helpers_test.go:243: (dbg) docker inspect no-preload-300878:

-- stdout --
	[
	    {
	        "Id": "5f7533431bd6ad05927fc6ee4ffbe127ba4e775919d072603a24e3cd4e1d5d89",
	        "Created": "2025-10-27T19:59:03.085735227Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 460174,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T20:00:43.524296602Z",
	            "FinishedAt": "2025-10-27T20:00:42.71806872Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/5f7533431bd6ad05927fc6ee4ffbe127ba4e775919d072603a24e3cd4e1d5d89/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5f7533431bd6ad05927fc6ee4ffbe127ba4e775919d072603a24e3cd4e1d5d89/hostname",
	        "HostsPath": "/var/lib/docker/containers/5f7533431bd6ad05927fc6ee4ffbe127ba4e775919d072603a24e3cd4e1d5d89/hosts",
	        "LogPath": "/var/lib/docker/containers/5f7533431bd6ad05927fc6ee4ffbe127ba4e775919d072603a24e3cd4e1d5d89/5f7533431bd6ad05927fc6ee4ffbe127ba4e775919d072603a24e3cd4e1d5d89-json.log",
	        "Name": "/no-preload-300878",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-300878:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-300878",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5f7533431bd6ad05927fc6ee4ffbe127ba4e775919d072603a24e3cd4e1d5d89",
	                "LowerDir": "/var/lib/docker/overlay2/9bbd17e120a72b16cae429750c48e4e412848fa0a221daef03291fd6af1df13e-init/diff:/var/lib/docker/overlay2/0567218bbe05e8b59b2aef7ad82032d6716040c1f9d9d73e7de8682101079f76/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9bbd17e120a72b16cae429750c48e4e412848fa0a221daef03291fd6af1df13e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9bbd17e120a72b16cae429750c48e4e412848fa0a221daef03291fd6af1df13e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9bbd17e120a72b16cae429750c48e4e412848fa0a221daef03291fd6af1df13e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-300878",
	                "Source": "/var/lib/docker/volumes/no-preload-300878/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-300878",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-300878",
	                "name.minikube.sigs.k8s.io": "no-preload-300878",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5f9fbf419a0051093d15a299a2c10a6faca066a07d1ce0492aebf175c3c99c37",
	            "SandboxKey": "/var/run/docker/netns/5f9fbf419a00",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-300878": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:82:b2:cc:e9:a9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "12fd71527f5be91d352c6fcacb328f609f1124632115c17524de411b48d37139",
	                    "EndpointID": "974cb468a3748c0de9246729814a896308acbd568be5371b60ecb575135f1f15",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-300878",
	                        "5f7533431bd6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
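The NetworkSettings.Ports block above shows each container port bound to an ephemeral host port on 127.0.0.1 (the apiserver's 8443/tcp landed on 33431 in this run). A minimal sketch of pulling that mapping back out with a docker inspect Go template:

	// port_lookup.go - extract the host port published for 8443/tcp.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "inspect",
			"--format", `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
			"no-preload-300878").Output()
		if err != nil {
			panic(err)
		}
		fmt.Printf("apiserver published on 127.0.0.1:%s\n", strings.TrimSpace(string(out)))
	}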
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-300878 -n no-preload-300878
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-300878 -n no-preload-300878: exit status 2 (561.068874ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-300878 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-300878 logs -n 25: (1.835942192s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cert-options-319273 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-319273    │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:56 UTC │
	│ delete  │ -p cert-options-319273                                                                                                                                                                                                                        │ cert-options-319273    │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:56 UTC │
	│ start   │ -p old-k8s-version-942644 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-942644 │ jenkins │ v1.37.0 │ 27 Oct 25 19:56 UTC │ 27 Oct 25 19:57 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-942644 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-942644 │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │                     │
	│ stop    │ -p old-k8s-version-942644 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-942644 │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:58 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-942644 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-942644 │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:58 UTC │
	│ start   │ -p old-k8s-version-942644 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-942644 │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:59 UTC │
	│ start   │ -p cert-expiration-280013 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-280013 │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:58 UTC │
	│ delete  │ -p cert-expiration-280013                                                                                                                                                                                                                     │ cert-expiration-280013 │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:59 UTC │
	│ start   │ -p no-preload-300878 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-300878      │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 20:00 UTC │
	│ image   │ old-k8s-version-942644 image list --format=json                                                                                                                                                                                               │ old-k8s-version-942644 │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 19:59 UTC │
	│ pause   │ -p old-k8s-version-942644 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-942644 │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │                     │
	│ delete  │ -p old-k8s-version-942644                                                                                                                                                                                                                     │ old-k8s-version-942644 │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 19:59 UTC │
	│ delete  │ -p old-k8s-version-942644                                                                                                                                                                                                                     │ old-k8s-version-942644 │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 19:59 UTC │
	│ start   │ -p embed-certs-629838 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-629838     │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 20:01 UTC │
	│ addons  │ enable metrics-server -p no-preload-300878 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-300878      │ jenkins │ v1.37.0 │ 27 Oct 25 20:00 UTC │                     │
	│ stop    │ -p no-preload-300878 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-300878      │ jenkins │ v1.37.0 │ 27 Oct 25 20:00 UTC │ 27 Oct 25 20:00 UTC │
	│ addons  │ enable dashboard -p no-preload-300878 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-300878      │ jenkins │ v1.37.0 │ 27 Oct 25 20:00 UTC │ 27 Oct 25 20:00 UTC │
	│ start   │ -p no-preload-300878 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-300878      │ jenkins │ v1.37.0 │ 27 Oct 25 20:00 UTC │ 27 Oct 25 20:01 UTC │
	│ addons  │ enable metrics-server -p embed-certs-629838 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-629838     │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │                     │
	│ stop    │ -p embed-certs-629838 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-629838     │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ addons  │ enable dashboard -p embed-certs-629838 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-629838     │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ start   │ -p embed-certs-629838 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-629838     │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │                     │
	│ image   │ no-preload-300878 image list --format=json                                                                                                                                                                                                    │ no-preload-300878      │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ pause   │ -p no-preload-300878 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-300878      │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 20:01:29
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 20:01:29.265783  462995 out.go:360] Setting OutFile to fd 1 ...
	I1027 20:01:29.266115  462995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:01:29.266155  462995 out.go:374] Setting ErrFile to fd 2...
	I1027 20:01:29.266175  462995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:01:29.266457  462995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 20:01:29.266870  462995 out.go:368] Setting JSON to false
	I1027 20:01:29.267913  462995 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9842,"bootTime":1761585448,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1027 20:01:29.268015  462995 start.go:141] virtualization:  
	I1027 20:01:29.271120  462995 out.go:179] * [embed-certs-629838] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 20:01:29.275036  462995 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 20:01:29.275162  462995 notify.go:220] Checking for updates...
	I1027 20:01:29.280975  462995 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 20:01:29.283884  462995 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 20:01:29.286727  462995 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube
	I1027 20:01:29.289529  462995 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 20:01:29.292376  462995 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 20:01:29.295726  462995 config.go:182] Loaded profile config "embed-certs-629838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:01:29.296331  462995 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 20:01:29.332066  462995 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 20:01:29.332240  462995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 20:01:29.399528  462995 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 20:01:29.389124253 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 20:01:29.399638  462995 docker.go:318] overlay module found
	I1027 20:01:29.402928  462995 out.go:179] * Using the docker driver based on existing profile
	I1027 20:01:29.406087  462995 start.go:305] selected driver: docker
	I1027 20:01:29.406108  462995 start.go:925] validating driver "docker" against &{Name:embed-certs-629838 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-629838 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:01:29.406218  462995 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 20:01:29.407084  462995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 20:01:29.457679  462995 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 20:01:29.448361319 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 20:01:29.458014  462995 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 20:01:29.458049  462995 cni.go:84] Creating CNI manager for ""
	I1027 20:01:29.458113  462995 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 20:01:29.458155  462995 start.go:349] cluster config:
	{Name:embed-certs-629838 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-629838 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:01:29.461395  462995 out.go:179] * Starting "embed-certs-629838" primary control-plane node in "embed-certs-629838" cluster
	I1027 20:01:29.464174  462995 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 20:01:29.467092  462995 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 20:01:29.469852  462995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:01:29.469903  462995 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 20:01:29.469914  462995 cache.go:58] Caching tarball of preloaded images
	I1027 20:01:29.469945  462995 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 20:01:29.470005  462995 preload.go:233] Found /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 20:01:29.470015  462995 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 20:01:29.470127  462995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/config.json ...
	I1027 20:01:29.489141  462995 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 20:01:29.489162  462995 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 20:01:29.489180  462995 cache.go:232] Successfully downloaded all kic artifacts
	I1027 20:01:29.489202  462995 start.go:360] acquireMachinesLock for embed-certs-629838: {Name:mk8675e8c935af9c23da71750794b4a71f97e11f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 20:01:29.489258  462995 start.go:364] duration metric: took 39.023µs to acquireMachinesLock for "embed-certs-629838"
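The acquireMachinesLock entry above carries Delay:500ms and Timeout:10m0s. minikube's real lock comes from a mutex library, but the retry-until-deadline shape it logs can be sketched with a plain lock file (path illustrative):

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquireLock retries an exclusive lock-file create every delay until the
    // deadline, mirroring the Delay/Timeout fields logged above.
    func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, errors.New("timed out acquiring " + path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquireLock("/tmp/machines-embed-certs-629838.lock",
            500*time.Millisecond, 10*time.Minute)
        if err != nil {
            panic(err)
        }
        defer release()
        fmt.Println("lock held")
    }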
	I1027 20:01:29.489276  462995 start.go:96] Skipping create...Using existing machine configuration
	I1027 20:01:29.489282  462995 fix.go:54] fixHost starting: 
	I1027 20:01:29.489533  462995 cli_runner.go:164] Run: docker container inspect embed-certs-629838 --format={{.State.Status}}
	I1027 20:01:29.507387  462995 fix.go:112] recreateIfNeeded on embed-certs-629838: state=Stopped err=<nil>
	W1027 20:01:29.507423  462995 fix.go:138] unexpected machine state, will restart: <nil>
	W1027 20:01:29.252716  460048 pod_ready.go:104] pod "coredns-66bc5c9577-jlg4z" is not "Ready", error: <nil>
	W1027 20:01:31.256342  460048 pod_ready.go:104] pod "coredns-66bc5c9577-jlg4z" is not "Ready", error: <nil>
	I1027 20:01:29.510561  462995 out.go:252] * Restarting existing docker container for "embed-certs-629838" ...
	I1027 20:01:29.510660  462995 cli_runner.go:164] Run: docker start embed-certs-629838
	I1027 20:01:29.763801  462995 cli_runner.go:164] Run: docker container inspect embed-certs-629838 --format={{.State.Status}}
	I1027 20:01:29.786256  462995 kic.go:430] container "embed-certs-629838" state is running.
	I1027 20:01:29.786764  462995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-629838
	I1027 20:01:29.810976  462995 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/config.json ...
	I1027 20:01:29.811244  462995 machine.go:93] provisionDockerMachine start ...
	I1027 20:01:29.811303  462995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 20:01:29.836016  462995 main.go:141] libmachine: Using SSH client type: native
	I1027 20:01:29.836463  462995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1027 20:01:29.836477  462995 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 20:01:29.837479  462995 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1027 20:01:32.986640  462995 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-629838
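The first dial at 20:01:29 fails with "handshake failed: EOF" because sshd inside the just-restarted container is not up yet; minikube simply retries until the hostname command succeeds about three seconds later. A minimal sketch of that retry loop with golang.org/x/crypto/ssh (address, user, and key path are taken from the log; the retry count is illustrative):

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // runSSH retries the dial while the freshly started container's sshd
    // comes up, then runs one command and returns its combined output.
    func runSSH(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
            Timeout:         5 * time.Second,
        }
        var client *ssh.Client
        for i := 0; i < 10; i++ { // retry budget is illustrative
            if client, err = ssh.Dial("tcp", addr, cfg); err == nil {
                break
            }
            time.Sleep(time.Second)
        }
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := runSSH("127.0.0.1:33433", "docker",
            "/home/jenkins/minikube-integration/21801-266035/.minikube/machines/embed-certs-629838/id_rsa",
            "hostname")
        fmt.Println(out, err)
    }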
	
	I1027 20:01:32.986669  462995 ubuntu.go:182] provisioning hostname "embed-certs-629838"
	I1027 20:01:32.986738  462995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 20:01:33.015247  462995 main.go:141] libmachine: Using SSH client type: native
	I1027 20:01:33.015580  462995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1027 20:01:33.015599  462995 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-629838 && echo "embed-certs-629838" | sudo tee /etc/hostname
	I1027 20:01:33.172637  462995 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-629838
	
	I1027 20:01:33.172746  462995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 20:01:33.199099  462995 main.go:141] libmachine: Using SSH client type: native
	I1027 20:01:33.199412  462995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1027 20:01:33.199436  462995 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-629838' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-629838/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-629838' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 20:01:33.363483  462995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
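The shell snippet above is an idempotent /etc/hosts fixup: do nothing if the hostname is already mapped, rewrite an existing 127.0.1.1 line if there is one, otherwise append. The same logic in Go, as a sketch:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry mirrors the grep/sed/tee pipeline logged above.
    func ensureHostsEntry(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        if strings.Contains(string(data), hostname) {
            return nil // already mapped, nothing to do
        }
        lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
        want := "127.0.1.1 " + hostname
        replaced := false
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i], replaced = want, true
                break
            }
        }
        if !replaced {
            lines = append(lines, want)
        }
        return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0o644)
    }

    func main() { fmt.Println(ensureHostsEntry("/etc/hosts", "embed-certs-629838")) }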
	I1027 20:01:33.363574  462995 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-266035/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-266035/.minikube}
	I1027 20:01:33.363638  462995 ubuntu.go:190] setting up certificates
	I1027 20:01:33.363679  462995 provision.go:84] configureAuth start
	I1027 20:01:33.363769  462995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-629838
	I1027 20:01:33.380639  462995 provision.go:143] copyHostCerts
	I1027 20:01:33.380711  462995 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem, removing ...
	I1027 20:01:33.380733  462995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem
	I1027 20:01:33.380818  462995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem (1123 bytes)
	I1027 20:01:33.380920  462995 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem, removing ...
	I1027 20:01:33.380931  462995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem
	I1027 20:01:33.380959  462995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem (1675 bytes)
	I1027 20:01:33.381020  462995 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem, removing ...
	I1027 20:01:33.381029  462995 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem
	I1027 20:01:33.381054  462995 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem (1078 bytes)
	I1027 20:01:33.381110  462995 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem org=jenkins.embed-certs-629838 san=[127.0.0.1 192.168.76.2 embed-certs-629838 localhost minikube]
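The server cert above is issued with SANs covering the loopback address, the node IP, and the machine's hostnames, so a TLS client can validate any of them. A sketch with crypto/x509 (self-signed here for brevity; minikube actually signs with its CA cert and key, and the SAN list is copied from the log line):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        priv, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-629838"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs from the log: san=[127.0.0.1 192.168.76.2 embed-certs-629838 localhost minikube]
            DNSNames:    []string{"embed-certs-629838", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
        }
        // Self-signed for the sketch; minikube passes its CA cert and CA key here.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &priv.PublicKey, priv)
        if err != nil {
            panic(err)
        }
        _ = os.WriteFile("server.pem",
            pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
    }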
	I1027 20:01:33.922784  462995 provision.go:177] copyRemoteCerts
	I1027 20:01:33.922866  462995 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 20:01:33.922917  462995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 20:01:33.941193  462995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/embed-certs-629838/id_rsa Username:docker}
	I1027 20:01:34.051601  462995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 20:01:34.073061  462995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 20:01:34.092112  462995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 20:01:34.110620  462995 provision.go:87] duration metric: took 746.901886ms to configureAuth
	I1027 20:01:34.110702  462995 ubuntu.go:206] setting minikube options for container-runtime
	I1027 20:01:34.110936  462995 config.go:182] Loaded profile config "embed-certs-629838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:01:34.111132  462995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 20:01:34.130041  462995 main.go:141] libmachine: Using SSH client type: native
	I1027 20:01:34.130356  462995 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1027 20:01:34.130370  462995 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 20:01:34.461739  462995 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
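A sketch of the same sysconfig drop-in written locally from Go rather than over SSH (requires root; path and option string copied from the command above):

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // Write the CRI-O options file exactly as the SSH command does, then
        // restart the service so --insecure-registry takes effect.
        opts := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
        if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(opts), 0o644); err != nil {
            panic(err)
        }
        if err := exec.Command("systemctl", "restart", "crio").Run(); err != nil {
            panic(err)
        }
    }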
	
	I1027 20:01:34.461759  462995 machine.go:96] duration metric: took 4.650505847s to provisionDockerMachine
	I1027 20:01:34.461769  462995 start.go:293] postStartSetup for "embed-certs-629838" (driver="docker")
	I1027 20:01:34.461780  462995 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 20:01:34.461855  462995 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 20:01:34.461895  462995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 20:01:34.483779  462995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/embed-certs-629838/id_rsa Username:docker}
	I1027 20:01:34.598205  462995 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 20:01:34.604921  462995 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 20:01:34.604949  462995 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 20:01:34.604960  462995 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/addons for local assets ...
	I1027 20:01:34.605016  462995 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/files for local assets ...
	I1027 20:01:34.605109  462995 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem -> 2678802.pem in /etc/ssl/certs
	I1027 20:01:34.605213  462995 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 20:01:34.616959  462995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 20:01:34.645990  462995 start.go:296] duration metric: took 184.205409ms for postStartSetup
	I1027 20:01:34.646145  462995 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 20:01:34.646218  462995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 20:01:34.664079  462995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/embed-certs-629838/id_rsa Username:docker}
	I1027 20:01:34.771013  462995 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 20:01:34.776626  462995 fix.go:56] duration metric: took 5.28733534s for fixHost
	I1027 20:01:34.776648  462995 start.go:83] releasing machines lock for "embed-certs-629838", held for 5.287381961s
	I1027 20:01:34.776725  462995 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-629838
	I1027 20:01:34.793484  462995 ssh_runner.go:195] Run: cat /version.json
	I1027 20:01:34.793533  462995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 20:01:34.793875  462995 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 20:01:34.793933  462995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 20:01:34.816324  462995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/embed-certs-629838/id_rsa Username:docker}
	I1027 20:01:34.824624  462995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/embed-certs-629838/id_rsa Username:docker}
	I1027 20:01:34.918894  462995 ssh_runner.go:195] Run: systemctl --version
	I1027 20:01:35.018128  462995 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 20:01:35.066804  462995 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 20:01:35.072709  462995 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 20:01:35.072891  462995 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 20:01:35.082744  462995 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 20:01:35.082771  462995 start.go:495] detecting cgroup driver to use...
	I1027 20:01:35.082807  462995 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 20:01:35.082872  462995 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 20:01:35.099518  462995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 20:01:35.113387  462995 docker.go:218] disabling cri-docker service (if available) ...
	I1027 20:01:35.113485  462995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 20:01:35.130167  462995 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 20:01:35.144237  462995 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 20:01:35.268401  462995 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 20:01:35.388675  462995 docker.go:234] disabling docker service ...
	I1027 20:01:35.388741  462995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 20:01:35.404531  462995 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 20:01:35.417444  462995 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 20:01:35.543939  462995 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 20:01:35.667500  462995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 20:01:35.680475  462995 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 20:01:35.695906  462995 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 20:01:35.696021  462995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:01:35.705823  462995 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 20:01:35.705943  462995 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:01:35.715910  462995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:01:35.725338  462995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:01:35.734556  462995 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 20:01:35.744674  462995 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:01:35.755290  462995 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:01:35.764120  462995 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:01:35.772893  462995 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 20:01:35.780449  462995 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
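The sed one-liners above all follow one pattern: rewrite a `key = value` line in /etc/crio/crio.conf.d/02-crio.conf in place (pause_image, cgroup_manager, and so on). A Go equivalent of that pattern, as a sketch:

    package main

    import (
        "os"
        "regexp"
    )

    // setConfValue replaces a whole "key = value" line in a TOML-ish drop-in,
    // matching the same ^.*key = .*$ shape the sed commands above use.
    func setConfValue(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        if err := setConfValue(conf, "pause_image", "registry.k8s.io/pause:3.10.1"); err != nil {
            panic(err)
        }
        if err := setConfValue(conf, "cgroup_manager", "cgroupfs"); err != nil {
            panic(err)
        }
    }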
	I1027 20:01:35.787919  462995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:01:35.914540  462995 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 20:01:36.056783  462995 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 20:01:36.056934  462995 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 20:01:36.061391  462995 start.go:563] Will wait 60s for crictl version
	I1027 20:01:36.061472  462995 ssh_runner.go:195] Run: which crictl
	I1027 20:01:36.065865  462995 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 20:01:36.097719  462995 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 20:01:36.097820  462995 ssh_runner.go:195] Run: crio --version
	I1027 20:01:36.130037  462995 ssh_runner.go:195] Run: crio --version
	I1027 20:01:36.175848  462995 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1027 20:01:33.755107  460048 pod_ready.go:104] pod "coredns-66bc5c9577-jlg4z" is not "Ready", error: <nil>
	I1027 20:01:34.752002  460048 pod_ready.go:94] pod "coredns-66bc5c9577-jlg4z" is "Ready"
	I1027 20:01:34.752028  460048 pod_ready.go:86] duration metric: took 37.505469588s for pod "coredns-66bc5c9577-jlg4z" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:34.755657  460048 pod_ready.go:83] waiting for pod "etcd-no-preload-300878" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:34.760043  460048 pod_ready.go:94] pod "etcd-no-preload-300878" is "Ready"
	I1027 20:01:34.760070  460048 pod_ready.go:86] duration metric: took 4.387643ms for pod "etcd-no-preload-300878" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:34.762162  460048 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-300878" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:34.766509  460048 pod_ready.go:94] pod "kube-apiserver-no-preload-300878" is "Ready"
	I1027 20:01:34.766537  460048 pod_ready.go:86] duration metric: took 4.348784ms for pod "kube-apiserver-no-preload-300878" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:34.768891  460048 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-300878" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:34.950790  460048 pod_ready.go:94] pod "kube-controller-manager-no-preload-300878" is "Ready"
	I1027 20:01:34.950816  460048 pod_ready.go:86] duration metric: took 181.897626ms for pod "kube-controller-manager-no-preload-300878" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:35.150338  460048 pod_ready.go:83] waiting for pod "kube-proxy-wpv4w" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:35.551499  460048 pod_ready.go:94] pod "kube-proxy-wpv4w" is "Ready"
	I1027 20:01:35.551527  460048 pod_ready.go:86] duration metric: took 401.158248ms for pod "kube-proxy-wpv4w" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:35.750780  460048 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-300878" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:36.149903  460048 pod_ready.go:94] pod "kube-scheduler-no-preload-300878" is "Ready"
	I1027 20:01:36.149930  460048 pod_ready.go:86] duration metric: took 399.123737ms for pod "kube-scheduler-no-preload-300878" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:01:36.149941  460048 pod_ready.go:40] duration metric: took 38.910741823s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 20:01:36.218433  460048 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 20:01:36.221592  460048 out.go:179] * Done! kubectl is now configured to use "no-preload-300878" cluster and "default" namespace by default
	I1027 20:01:36.178799  462995 cli_runner.go:164] Run: docker network inspect embed-certs-629838 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 20:01:36.198470  462995 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1027 20:01:36.204006  462995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 20:01:36.219940  462995 kubeadm.go:883] updating cluster {Name:embed-certs-629838 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-629838 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 20:01:36.220078  462995 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:01:36.220143  462995 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 20:01:36.286214  462995 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 20:01:36.286240  462995 crio.go:433] Images already preloaded, skipping extraction
	I1027 20:01:36.286297  462995 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 20:01:36.323318  462995 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 20:01:36.323343  462995 cache_images.go:85] Images are preloaded, skipping loading
	I1027 20:01:36.323351  462995 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1027 20:01:36.323461  462995 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-629838 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-629838 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 20:01:36.323552  462995 ssh_runner.go:195] Run: crio config
	I1027 20:01:36.411757  462995 cni.go:84] Creating CNI manager for ""
	I1027 20:01:36.411824  462995 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 20:01:36.411861  462995 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 20:01:36.411913  462995 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-629838 NodeName:embed-certs-629838 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 20:01:36.412175  462995 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-629838"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
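The kubeadm file above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). One useful sanity check is that the kubelet's cgroupDriver matches the cgroup_manager value written into the CRI-O conf earlier. A sketch with gopkg.in/yaml.v3 (file path taken from the log):

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        // Decode each "---"-separated document and pick out the kubelet config.
        dec := yaml.NewDecoder(bytes.NewReader(data))
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            if doc["kind"] == "KubeletConfiguration" {
                fmt.Println("kubelet cgroupDriver:", doc["cgroupDriver"]) // expect "cgroupfs"
            }
        }
    }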
	
	I1027 20:01:36.412270  462995 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 20:01:36.420959  462995 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 20:01:36.421035  462995 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 20:01:36.435459  462995 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1027 20:01:36.450638  462995 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 20:01:36.470156  462995 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1027 20:01:36.489409  462995 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1027 20:01:36.494344  462995 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 20:01:36.507912  462995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:01:36.690261  462995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 20:01:36.707735  462995 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838 for IP: 192.168.76.2
	I1027 20:01:36.707752  462995 certs.go:195] generating shared ca certs ...
	I1027 20:01:36.707769  462995 certs.go:227] acquiring lock for ca certs: {Name:mk172548fc54811b51040cc0201d01eb4b3dd19b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:01:36.707928  462995 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key
	I1027 20:01:36.707973  462995 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key
	I1027 20:01:36.707980  462995 certs.go:257] generating profile certs ...
	I1027 20:01:36.708077  462995 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/client.key
	I1027 20:01:36.708138  462995 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/apiserver.key.4ab968a1
	I1027 20:01:36.708177  462995 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/proxy-client.key
	I1027 20:01:36.708293  462995 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880.pem (1338 bytes)
	W1027 20:01:36.708322  462995 certs.go:480] ignoring /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880_empty.pem, impossibly tiny 0 bytes
	I1027 20:01:36.708330  462995 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 20:01:36.708353  462995 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem (1078 bytes)
	I1027 20:01:36.708375  462995 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem (1123 bytes)
	I1027 20:01:36.708396  462995 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem (1675 bytes)
	I1027 20:01:36.708435  462995 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 20:01:36.709017  462995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 20:01:36.744889  462995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 20:01:36.777584  462995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 20:01:36.794875  462995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 20:01:36.815804  462995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1027 20:01:36.834379  462995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 20:01:36.853708  462995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 20:01:36.882817  462995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/embed-certs-629838/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 20:01:36.902280  462995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /usr/share/ca-certificates/2678802.pem (1708 bytes)
	I1027 20:01:36.920069  462995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 20:01:36.937879  462995 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880.pem --> /usr/share/ca-certificates/267880.pem (1338 bytes)
	I1027 20:01:36.966080  462995 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 20:01:36.981440  462995 ssh_runner.go:195] Run: openssl version
	I1027 20:01:36.989383  462995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 20:01:36.999806  462995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:01:37.007330  462995 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:01:37.007465  462995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:01:37.055315  462995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 20:01:37.063170  462995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/267880.pem && ln -fs /usr/share/ca-certificates/267880.pem /etc/ssl/certs/267880.pem"
	I1027 20:01:37.071876  462995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/267880.pem
	I1027 20:01:37.075564  462995 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:04 /usr/share/ca-certificates/267880.pem
	I1027 20:01:37.075629  462995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/267880.pem
	I1027 20:01:37.117361  462995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/267880.pem /etc/ssl/certs/51391683.0"
	I1027 20:01:37.125994  462995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2678802.pem && ln -fs /usr/share/ca-certificates/2678802.pem /etc/ssl/certs/2678802.pem"
	I1027 20:01:37.135835  462995 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2678802.pem
	I1027 20:01:37.140087  462995 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:04 /usr/share/ca-certificates/2678802.pem
	I1027 20:01:37.140247  462995 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2678802.pem
	I1027 20:01:37.182159  462995 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2678802.pem /etc/ssl/certs/3ec20f2e.0"
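The `openssl x509 -hash` / `ln -fs` pairing above exists because OpenSSL resolves trust anchors in /etc/ssl/certs by `<subject-hash>.0` filenames. A Go sketch of the same pairing, still shelling out to openssl for the hash:

    package main

    import (
        "os"
        "os/exec"
        "strings"
    )

    // linkBySubjectHash computes the OpenSSL subject hash of a PEM and links
    // /etc/ssl/certs/<hash>.0 to it, like the commands in the log.
    func linkBySubjectHash(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        os.Remove(link) // replace any stale link, matching ln -fs
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            panic(err)
        }
    }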
	I1027 20:01:37.191516  462995 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 20:01:37.195534  462995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 20:01:37.238399  462995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 20:01:37.279918  462995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 20:01:37.321139  462995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 20:01:37.362872  462995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 20:01:37.403995  462995 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
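Each of the openssl runs above is `-checkend 86400`: fail if the certificate expires within the next day. The equivalent check in Go, as a sketch:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the certificate's NotAfter falls inside
    // the given window, mirroring `openssl x509 -checkend`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }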
	I1027 20:01:37.447091  462995 kubeadm.go:400] StartCluster: {Name:embed-certs-629838 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-629838 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:01:37.447188  462995 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 20:01:37.447308  462995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 20:01:37.490540  462995 cri.go:89] found id: ""
	I1027 20:01:37.490640  462995 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 20:01:37.500149  462995 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1027 20:01:37.500171  462995 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1027 20:01:37.500235  462995 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 20:01:37.518670  462995 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 20:01:37.519273  462995 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-629838" does not appear in /home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 20:01:37.519600  462995 kubeconfig.go:62] /home/jenkins/minikube-integration/21801-266035/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-629838" cluster setting kubeconfig missing "embed-certs-629838" context setting]
	I1027 20:01:37.520107  462995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/kubeconfig: {Name:mk0a03a594f8d9f9199b1c7cff7c550cec8414c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:01:37.521481  462995 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 20:01:37.537828  462995 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1027 20:01:37.537874  462995 kubeadm.go:601] duration metric: took 37.687919ms to restartPrimaryControlPlane
	I1027 20:01:37.537884  462995 kubeadm.go:402] duration metric: took 90.812698ms to StartCluster
	I1027 20:01:37.537926  462995 settings.go:142] acquiring lock: {Name:mk49b9e2e9b7901c6b51e89b8b1e8ee7f9e88107 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:01:37.538009  462995 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 20:01:37.539429  462995 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/kubeconfig: {Name:mk0a03a594f8d9f9199b1c7cff7c550cec8414c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:01:37.539929  462995 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 20:01:37.540195  462995 config.go:182] Loaded profile config "embed-certs-629838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:01:37.540346  462995 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 20:01:37.540426  462995 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-629838"
	I1027 20:01:37.540439  462995 addons.go:69] Setting dashboard=true in profile "embed-certs-629838"
	I1027 20:01:37.540446  462995 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-629838"
	I1027 20:01:37.540452  462995 addons.go:238] Setting addon dashboard=true in "embed-certs-629838"
	W1027 20:01:37.540459  462995 addons.go:247] addon dashboard should already be in state true
	W1027 20:01:37.540453  462995 addons.go:247] addon storage-provisioner should already be in state true
	I1027 20:01:37.540486  462995 host.go:66] Checking if "embed-certs-629838" exists ...
	I1027 20:01:37.540493  462995 host.go:66] Checking if "embed-certs-629838" exists ...
	I1027 20:01:37.540960  462995 cli_runner.go:164] Run: docker container inspect embed-certs-629838 --format={{.State.Status}}
	I1027 20:01:37.540970  462995 cli_runner.go:164] Run: docker container inspect embed-certs-629838 --format={{.State.Status}}
	I1027 20:01:37.540459  462995 addons.go:69] Setting default-storageclass=true in profile "embed-certs-629838"
	I1027 20:01:37.541523  462995 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-629838"
	I1027 20:01:37.541783  462995 cli_runner.go:164] Run: docker container inspect embed-certs-629838 --format={{.State.Status}}
	I1027 20:01:37.548701  462995 out.go:179] * Verifying Kubernetes components...
	I1027 20:01:37.565625  462995 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:01:37.593461  462995 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 20:01:37.596859  462995 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1027 20:01:37.596975  462995 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 20:01:37.596986  462995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 20:01:37.597051  462995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 20:01:37.605190  462995 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1027 20:01:37.608131  462995 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1027 20:01:37.608157  462995 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1027 20:01:37.608225  462995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 20:01:37.616441  462995 addons.go:238] Setting addon default-storageclass=true in "embed-certs-629838"
	W1027 20:01:37.616464  462995 addons.go:247] addon default-storageclass should already be in state true
	I1027 20:01:37.616487  462995 host.go:66] Checking if "embed-certs-629838" exists ...
	I1027 20:01:37.616914  462995 cli_runner.go:164] Run: docker container inspect embed-certs-629838 --format={{.State.Status}}
	I1027 20:01:37.644963  462995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/embed-certs-629838/id_rsa Username:docker}
	I1027 20:01:37.668778  462995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/embed-certs-629838/id_rsa Username:docker}
	I1027 20:01:37.680851  462995 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 20:01:37.680873  462995 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 20:01:37.680932  462995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 20:01:37.706299  462995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/embed-certs-629838/id_rsa Username:docker}
	I1027 20:01:37.908919  462995 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 20:01:37.932674  462995 node_ready.go:35] waiting up to 6m0s for node "embed-certs-629838" to be "Ready" ...
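The same readiness wait can be reproduced outside minikube with `kubectl wait`, which blocks until the node's Ready condition holds or the timeout passes. A sketch shelling out to it (node name and timeout taken from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Block until the node reports Ready, up to the same 6m budget.
        out, err := exec.Command("kubectl", "wait", "--for=condition=Ready",
            "node/embed-certs-629838", "--timeout=6m0s").CombinedOutput()
        fmt.Printf("%s err=%v\n", out, err)
    }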
	I1027 20:01:37.965532  462995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 20:01:38.039363  462995 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1027 20:01:38.039387  462995 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1027 20:01:38.081538  462995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 20:01:38.121006  462995 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1027 20:01:38.121075  462995 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1027 20:01:38.162751  462995 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1027 20:01:38.162825  462995 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1027 20:01:38.217182  462995 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1027 20:01:38.217253  462995 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1027 20:01:38.261360  462995 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1027 20:01:38.261430  462995 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1027 20:01:38.274878  462995 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1027 20:01:38.274949  462995 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1027 20:01:38.288704  462995 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1027 20:01:38.288776  462995 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1027 20:01:38.301449  462995 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1027 20:01:38.301520  462995 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1027 20:01:38.315677  462995 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 20:01:38.315746  462995 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1027 20:01:38.328405  462995 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 20:01:42.487268  462995 node_ready.go:49] node "embed-certs-629838" is "Ready"
	I1027 20:01:42.487304  462995 node_ready.go:38] duration metric: took 4.554590314s for node "embed-certs-629838" to be "Ready" ...
	I1027 20:01:42.487317  462995 api_server.go:52] waiting for apiserver process to appear ...
	I1027 20:01:42.487376  462995 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 20:01:44.067599  462995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.102033779s)
	I1027 20:01:44.067711  462995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.986100965s)
	I1027 20:01:44.132211  462995 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.803719799s)
	I1027 20:01:44.132449  462995 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.645057063s)
	I1027 20:01:44.132489  462995 api_server.go:72] duration metric: took 6.592524488s to wait for apiserver process to appear ...
	I1027 20:01:44.132510  462995 api_server.go:88] waiting for apiserver healthz status ...
	I1027 20:01:44.132542  462995 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 20:01:44.135382  462995 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-629838 addons enable metrics-server
	
	I1027 20:01:44.138270  462995 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1027 20:01:44.141108  462995 addons.go:514] duration metric: took 6.600758834s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1027 20:01:44.145345  462995 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 20:01:44.145367  462995 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 20:01:44.632927  462995 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 20:01:44.641645  462995 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1027 20:01:44.642699  462995 api_server.go:141] control plane version: v1.34.1
	I1027 20:01:44.642758  462995 api_server.go:131] duration metric: took 510.228202ms to wait for apiserver health ...
	I1027 20:01:44.642798  462995 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 20:01:44.654508  462995 system_pods.go:59] 8 kube-system pods found
	I1027 20:01:44.654623  462995 system_pods.go:61] "coredns-66bc5c9577-ch8jv" [31b0e0f4-af1b-40c7-9b20-0941025a0e20] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:01:44.654649  462995 system_pods.go:61] "etcd-embed-certs-629838" [8a3ea38f-5f58-4bc1-92a6-f623c5bbf89a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 20:01:44.654684  462995 system_pods.go:61] "kindnet-cfqpk" [26bd03b9-d7d1-42d3-9a0f-57a0079df4df] Running
	I1027 20:01:44.654715  462995 system_pods.go:61] "kube-apiserver-embed-certs-629838" [5a679f7d-bd7f-41f2-8494-368533168d94] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 20:01:44.654740  462995 system_pods.go:61] "kube-controller-manager-embed-certs-629838" [208c550a-f378-40f6-aa34-d3e6f628ec0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 20:01:44.654763  462995 system_pods.go:61] "kube-proxy-bwql6" [41eb9367-187c-4241-967f-46d7e5ff9003] Running
	I1027 20:01:44.654799  462995 system_pods.go:61] "kube-scheduler-embed-certs-629838" [79fc7d25-8b0a-46c9-82f0-714850ebf675] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 20:01:44.654837  462995 system_pods.go:61] "storage-provisioner" [39cc3c46-ef65-4c2e-8c82-6273f639f702] Running
	I1027 20:01:44.654860  462995 system_pods.go:74] duration metric: took 12.037833ms to wait for pod list to return data ...
	I1027 20:01:44.654882  462995 default_sa.go:34] waiting for default service account to be created ...
	I1027 20:01:44.660144  462995 default_sa.go:45] found service account: "default"
	I1027 20:01:44.660207  462995 default_sa.go:55] duration metric: took 5.291602ms for default service account to be created ...
	I1027 20:01:44.660232  462995 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 20:01:44.665243  462995 system_pods.go:86] 8 kube-system pods found
	I1027 20:01:44.665328  462995 system_pods.go:89] "coredns-66bc5c9577-ch8jv" [31b0e0f4-af1b-40c7-9b20-0941025a0e20] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:01:44.665355  462995 system_pods.go:89] "etcd-embed-certs-629838" [8a3ea38f-5f58-4bc1-92a6-f623c5bbf89a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 20:01:44.665396  462995 system_pods.go:89] "kindnet-cfqpk" [26bd03b9-d7d1-42d3-9a0f-57a0079df4df] Running
	I1027 20:01:44.665427  462995 system_pods.go:89] "kube-apiserver-embed-certs-629838" [5a679f7d-bd7f-41f2-8494-368533168d94] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 20:01:44.665453  462995 system_pods.go:89] "kube-controller-manager-embed-certs-629838" [208c550a-f378-40f6-aa34-d3e6f628ec0b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 20:01:44.665475  462995 system_pods.go:89] "kube-proxy-bwql6" [41eb9367-187c-4241-967f-46d7e5ff9003] Running
	I1027 20:01:44.665511  462995 system_pods.go:89] "kube-scheduler-embed-certs-629838" [79fc7d25-8b0a-46c9-82f0-714850ebf675] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 20:01:44.665541  462995 system_pods.go:89] "storage-provisioner" [39cc3c46-ef65-4c2e-8c82-6273f639f702] Running
	I1027 20:01:44.665566  462995 system_pods.go:126] duration metric: took 5.314813ms to wait for k8s-apps to be running ...
	I1027 20:01:44.665588  462995 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 20:01:44.665672  462995 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 20:01:44.679720  462995 system_svc.go:56] duration metric: took 14.123371ms WaitForService to wait for kubelet
	I1027 20:01:44.679756  462995 kubeadm.go:586] duration metric: took 7.13979002s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 20:01:44.679776  462995 node_conditions.go:102] verifying NodePressure condition ...
	I1027 20:01:44.682851  462995 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 20:01:44.682880  462995 node_conditions.go:123] node cpu capacity is 2
	I1027 20:01:44.682901  462995 node_conditions.go:105] duration metric: took 3.111388ms to run NodePressure ...
	I1027 20:01:44.682915  462995 start.go:241] waiting for startup goroutines ...
	I1027 20:01:44.682926  462995 start.go:246] waiting for cluster config update ...
	I1027 20:01:44.682937  462995 start.go:255] writing updated cluster config ...
	I1027 20:01:44.683329  462995 ssh_runner.go:195] Run: rm -f paused
	I1027 20:01:44.687463  462995 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 20:01:44.692303  462995 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ch8jv" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 20:01:46.697902  462995 pod_ready.go:104] pod "coredns-66bc5c9577-ch8jv" is not "Ready", error: <nil>
	W1027 20:01:48.708297  462995 pod_ready.go:104] pod "coredns-66bc5c9577-ch8jv" is not "Ready", error: <nil>
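	
	The minikube log above shows its health-gate pattern: the first probe of `/healthz` returns 500 while the `poststarthook/rbac/bootstrap-roles` hook is still pending, and a retry roughly half a second later returns 200 before the version check proceeds. A minimal Go sketch of that poll-until-healthy loop follows; the URL is taken from the log, while the timeout, interval, and TLS handling are illustrative assumptions, not minikube's actual implementation.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns
// HTTP 200 or the deadline expires. The apiserver serves a self-signed
// certificate during bootstrap, so verification is skipped here.
func waitForHealthz(url string, timeout, interval time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			// A 500 body lists each failing post-start hook, e.g.
			// "[-]poststarthook/rbac/bootstrap-roles failed".
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	// Endpoint from the log above; timeout and interval are assumptions.
	err := waitForHealthz("https://192.168.76.2:8443/healthz",
		2*time.Minute, 500*time.Millisecond)
	if err != nil {
		fmt.Println(err)
	}
}
```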
	
	
	==> CRI-O <==
	Oct 27 20:01:20 no-preload-300878 crio[650]: time="2025-10-27T20:01:20.724724221Z" level=info msg="Removed container 68caf56d837a229d05c8f1654e944dba99fa24fa78befabc04c543175571b2ad: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8q7f/dashboard-metrics-scraper" id=d742916f-69e3-42be-a116-bcdcf2f6c3d3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 20:01:26 no-preload-300878 conmon[1139]: conmon ce4b2f0831d4b6d80de7 <ninfo>: container 1146 exited with status 1
	Oct 27 20:01:26 no-preload-300878 crio[650]: time="2025-10-27T20:01:26.720403968Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=33c6561d-da46-48b5-b4fa-40c7b8af9c7c name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:01:26 no-preload-300878 crio[650]: time="2025-10-27T20:01:26.721902273Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1f40684e-b23b-47e3-8cba-96f76a9ce5b1 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:01:26 no-preload-300878 crio[650]: time="2025-10-27T20:01:26.723772719Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d19a84e3-6e0f-463d-a097-85b3d73dcadc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 20:01:26 no-preload-300878 crio[650]: time="2025-10-27T20:01:26.724016215Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:01:26 no-preload-300878 crio[650]: time="2025-10-27T20:01:26.73052631Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:01:26 no-preload-300878 crio[650]: time="2025-10-27T20:01:26.730846481Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/08993c2fea66d54883bfee7248cb629ecc8f4582be7b4b473609cb4c530d3969/merged/etc/passwd: no such file or directory"
	Oct 27 20:01:26 no-preload-300878 crio[650]: time="2025-10-27T20:01:26.730941814Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/08993c2fea66d54883bfee7248cb629ecc8f4582be7b4b473609cb4c530d3969/merged/etc/group: no such file or directory"
	Oct 27 20:01:26 no-preload-300878 crio[650]: time="2025-10-27T20:01:26.731267803Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:01:26 no-preload-300878 crio[650]: time="2025-10-27T20:01:26.751356658Z" level=info msg="Created container b445fba8c8ac67334825dd1de6351eea9e0bc3f983f24bfdffca5c882248e749: kube-system/storage-provisioner/storage-provisioner" id=d19a84e3-6e0f-463d-a097-85b3d73dcadc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 20:01:26 no-preload-300878 crio[650]: time="2025-10-27T20:01:26.756295851Z" level=info msg="Starting container: b445fba8c8ac67334825dd1de6351eea9e0bc3f983f24bfdffca5c882248e749" id=9e6a4105-ce58-4491-a716-5a9856025d8e name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 20:01:26 no-preload-300878 crio[650]: time="2025-10-27T20:01:26.758002659Z" level=info msg="Started container" PID=1626 containerID=b445fba8c8ac67334825dd1de6351eea9e0bc3f983f24bfdffca5c882248e749 description=kube-system/storage-provisioner/storage-provisioner id=9e6a4105-ce58-4491-a716-5a9856025d8e name=/runtime.v1.RuntimeService/StartContainer sandboxID=29561a3cf2bbe151847e8a3e42dfee256bcf867b8934334749987b9529e1211a
	Oct 27 20:01:36 no-preload-300878 crio[650]: time="2025-10-27T20:01:36.566874405Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 20:01:36 no-preload-300878 crio[650]: time="2025-10-27T20:01:36.573071482Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 20:01:36 no-preload-300878 crio[650]: time="2025-10-27T20:01:36.573095572Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 20:01:36 no-preload-300878 crio[650]: time="2025-10-27T20:01:36.573115526Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 20:01:36 no-preload-300878 crio[650]: time="2025-10-27T20:01:36.581435161Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 20:01:36 no-preload-300878 crio[650]: time="2025-10-27T20:01:36.581469999Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 20:01:36 no-preload-300878 crio[650]: time="2025-10-27T20:01:36.58148819Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 20:01:36 no-preload-300878 crio[650]: time="2025-10-27T20:01:36.587836344Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 20:01:36 no-preload-300878 crio[650]: time="2025-10-27T20:01:36.587869508Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 20:01:36 no-preload-300878 crio[650]: time="2025-10-27T20:01:36.587890964Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 20:01:36 no-preload-300878 crio[650]: time="2025-10-27T20:01:36.599833291Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 20:01:36 no-preload-300878 crio[650]: time="2025-10-27T20:01:36.599868318Z" level=info msg="Updated default CNI network name to kindnet"
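	
	The CRI-O entries above trace its CNI config monitor: kindnet writes `10-kindnet.conflist.temp` and renames it into place, and CRI-O reacts to each CREATE/WRITE/RENAME event by re-reading `/etc/cni/net.d` and updating the default network. A minimal sketch of the same watch-and-reload pattern using the fsnotify library; the directory matches the log, but the reload logic is a placeholder, not CRI-O's code.

```go
package main

import (
	"log"
	"path/filepath"

	"github.com/fsnotify/fsnotify"
)

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	// Same directory CRI-O monitors in the log above.
	if err := watcher.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}

	for {
		select {
		case ev, ok := <-watcher.Events:
			if !ok {
				return
			}
			log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
			// CRI-O skips the .temp file and reloads once the final
			// .conflist lands; placeholder logic for illustration.
			if filepath.Ext(ev.Name) == ".conflist" {
				log.Printf("reloading CNI config from %s", ev.Name)
			}
		case err, ok := <-watcher.Errors:
			if !ok {
				return
			}
			log.Println("watch error:", err)
		}
	}
}
```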
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	b445fba8c8ac6       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           27 seconds ago       Running             storage-provisioner         2                   29561a3cf2bbe       storage-provisioner                          kube-system
	ff4d1e7aa0afd       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           34 seconds ago       Exited              dashboard-metrics-scraper   2                   44ac3d2be1273       dashboard-metrics-scraper-6ffb444bf9-p8q7f   kubernetes-dashboard
	845ed893ecacd       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   43 seconds ago       Running             kubernetes-dashboard        0                   12950a2f56cc5       kubernetes-dashboard-855c9754f9-hqxgb        kubernetes-dashboard
	b6ddf809f10be       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   2cf02bb1a4d05       busybox                                      default
	e2edb66752f03       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   b5380d652fbbd       kube-proxy-wpv4w                             kube-system
	ce4b2f0831d4b       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           58 seconds ago       Exited              storage-provisioner         1                   29561a3cf2bbe       storage-provisioner                          kube-system
	3627ed707f2d5       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   f4a9e567480ee       coredns-66bc5c9577-jlg4z                     kube-system
	dab65465dca39       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   8e47a9a281004       kindnet-smnp2                                kube-system
	13d030edd8243       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   53fc1fe688275       kube-apiserver-no-preload-300878             kube-system
	e280576b7cd34       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   a71254e85ef08       etcd-no-preload-300878                       kube-system
	2e749e0d2383f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   2e55a0b642cf6       kube-controller-manager-no-preload-300878    kube-system
	75c7134600c5c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   a8799155ff185       kube-scheduler-no-preload-300878             kube-system
	
	
	==> coredns [3627ed707f2d56dfd79e2f7904b8af77c14c72df05c03340c5194af8a728a9c5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35597 - 966 "HINFO IN 3528978051479716237.7348152511424593718. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028944205s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
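	
	The coredns errors above all reduce to one symptom: `dial tcp 10.96.0.1:443: i/o timeout`, i.e. the `kubernetes` Service VIP was unreachable while the kube-proxy/CNI rules were still being restored after the restart, not that the apiserver itself was down. A quick connectivity probe of the kind that distinguishes the two cases, as a sketch (address copied from the errors; the 3s timeout is an assumption):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// probe attempts a TCP connection to the kubernetes Service VIP that
// coredns was failing to reach. An i/o timeout here usually means the
// rules routing the VIP are not programmed yet; a refused or successful
// connection points elsewhere.
func probe(addr string) {
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		fmt.Printf("dial %s: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("dial %s: ok\n", addr)
}

func main() {
	probe("10.96.0.1:443") // VIP from the coredns errors above
}
```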
	
	
	==> describe nodes <==
	Name:               no-preload-300878
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-300878
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=no-preload-300878
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T19_59_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 19:59:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-300878
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 20:01:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 20:01:36 +0000   Mon, 27 Oct 2025 19:59:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 20:01:36 +0000   Mon, 27 Oct 2025 19:59:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 20:01:36 +0000   Mon, 27 Oct 2025 19:59:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 20:01:36 +0000   Mon, 27 Oct 2025 20:00:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-300878
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                efc50928-8e8e-470b-97b1-2b65f64ae45b
	  Boot ID:                    23ea05b4-8203-4fb9-a84a-5deb4b091cbb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-jlg4z                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     117s
	  kube-system                 etcd-no-preload-300878                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m5s
	  kube-system                 kindnet-smnp2                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      117s
	  kube-system                 kube-apiserver-no-preload-300878              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-no-preload-300878     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-wpv4w                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-scheduler-no-preload-300878              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-p8q7f    0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hqxgb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 115s                   kube-proxy       
	  Normal   Starting                 57s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m12s (x8 over 2m12s)  kubelet          Node no-preload-300878 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m12s (x8 over 2m12s)  kubelet          Node no-preload-300878 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m12s (x8 over 2m12s)  kubelet          Node no-preload-300878 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m3s                   kubelet          Node no-preload-300878 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m3s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m3s                   kubelet          Node no-preload-300878 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m3s                   kubelet          Node no-preload-300878 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m3s                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           118s                   node-controller  Node no-preload-300878 event: Registered Node no-preload-300878 in Controller
	  Normal   NodeReady                100s                   kubelet          Node no-preload-300878 status is now: NodeReady
	  Normal   Starting                 64s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  64s (x8 over 64s)      kubelet          Node no-preload-300878 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s (x8 over 64s)      kubelet          Node no-preload-300878 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s (x8 over 64s)      kubelet          Node no-preload-300878 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                    node-controller  Node no-preload-300878 event: Registered Node no-preload-300878 in Controller
	
	
	==> dmesg <==
	[Oct27 19:37] overlayfs: idmapped layers are currently not supported
	[Oct27 19:38] overlayfs: idmapped layers are currently not supported
	[Oct27 19:39] overlayfs: idmapped layers are currently not supported
	[Oct27 19:40] overlayfs: idmapped layers are currently not supported
	[  +7.015891] overlayfs: idmapped layers are currently not supported
	[Oct27 19:41] overlayfs: idmapped layers are currently not supported
	[ +28.455216] overlayfs: idmapped layers are currently not supported
	[Oct27 19:42] overlayfs: idmapped layers are currently not supported
	[ +26.490174] overlayfs: idmapped layers are currently not supported
	[Oct27 19:44] overlayfs: idmapped layers are currently not supported
	[Oct27 19:45] overlayfs: idmapped layers are currently not supported
	[Oct27 19:47] overlayfs: idmapped layers are currently not supported
	[Oct27 19:49] overlayfs: idmapped layers are currently not supported
	[ +31.410335] overlayfs: idmapped layers are currently not supported
	[Oct27 19:51] overlayfs: idmapped layers are currently not supported
	[Oct27 19:53] overlayfs: idmapped layers are currently not supported
	[Oct27 19:54] overlayfs: idmapped layers are currently not supported
	[Oct27 19:55] overlayfs: idmapped layers are currently not supported
	[Oct27 19:56] overlayfs: idmapped layers are currently not supported
	[Oct27 19:57] overlayfs: idmapped layers are currently not supported
	[Oct27 19:58] overlayfs: idmapped layers are currently not supported
	[Oct27 19:59] overlayfs: idmapped layers are currently not supported
	[Oct27 20:00] overlayfs: idmapped layers are currently not supported
	[ +41.321877] overlayfs: idmapped layers are currently not supported
	[Oct27 20:01] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e280576b7cd349242f75cfabe39f57b429b7f1be59a692085c1ac72054e39d40] <==
	{"level":"warn","ts":"2025-10-27T20:00:53.796242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:53.850575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:53.854851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:53.871568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:53.892631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:53.911266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:53.923966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:53.944707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:53.959898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:53.978408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:54.005220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:54.023055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:54.042902Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:54.059075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:54.073909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:54.091298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:54.108519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:54.132121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:54.146322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:54.192119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:54.236838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:54.269469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:54.286282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:54.303394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:00:54.374275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58258","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:01:55 up  2:44,  0 user,  load average: 2.81, 2.98, 2.62
	Linux no-preload-300878 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [dab65465dca396808c706937f4961cc55e7b5490a396435f7a5ce712e477451c] <==
	I1027 20:00:56.391570       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 20:00:56.391797       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1027 20:00:56.391931       1 main.go:148] setting mtu 1500 for CNI 
	I1027 20:00:56.391943       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 20:00:56.391954       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T20:00:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 20:00:56.565743       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 20:00:56.565767       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 20:00:56.565776       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 20:00:56.619556       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 20:01:26.565966       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1027 20:01:26.619673       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1027 20:01:26.619673       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1027 20:01:26.619873       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1027 20:01:28.166266       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 20:01:28.166308       1 metrics.go:72] Registering metrics
	I1027 20:01:28.166382       1 controller.go:711] "Syncing nftables rules"
	I1027 20:01:36.565390       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 20:01:36.565436       1 main.go:301] handling current node
	I1027 20:01:46.567051       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 20:01:46.567195       1 main.go:301] handling current node
	
	
	==> kube-apiserver [13d030edd8243f160331166818aa22d925ebe9d23d02c061f90430ac5760b9ea] <==
	I1027 20:00:55.596274       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1027 20:00:55.596314       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1027 20:00:55.596350       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 20:00:55.603882       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 20:00:55.604682       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 20:00:55.604763       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 20:00:55.618706       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1027 20:00:55.619292       1 aggregator.go:171] initial CRD sync complete...
	I1027 20:00:55.619309       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 20:00:55.619316       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 20:00:55.619322       1 cache.go:39] Caches are synced for autoregister controller
	I1027 20:00:55.626791       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 20:00:55.628671       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1027 20:00:55.629040       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1027 20:00:55.755021       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1027 20:00:56.097239       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 20:00:56.833892       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 20:00:56.900985       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 20:00:56.937648       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 20:00:56.954796       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 20:00:57.092408       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.45.112"}
	I1027 20:00:57.128858       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.122.251"}
	I1027 20:00:58.614193       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 20:00:59.108240       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 20:00:59.214204       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [2e749e0d2383f2957caa1a338e460f11d3d14d875b53417f4e0cd2479aad76e0] <==
	I1027 20:00:58.615572       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 20:00:58.614864       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1027 20:00:58.618225       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 20:00:58.621293       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 20:00:58.626725       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 20:00:58.627385       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 20:00:58.633670       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 20:00:58.633779       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 20:00:58.633679       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 20:00:58.636387       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 20:00:58.636773       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1027 20:00:58.640101       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 20:00:58.644883       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 20:00:58.651497       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 20:00:58.652655       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 20:00:58.652664       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 20:00:58.652813       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 20:00:58.652841       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 20:00:58.652965       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 20:00:58.653277       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-300878"
	I1027 20:00:58.653352       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1027 20:00:58.657694       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 20:00:58.666897       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 20:00:59.114748       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	I1027 20:00:59.114955       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [e2edb66752f0320ca15324996f37b99874c7495a2aef8abe85781a5d7bfa18cf] <==
	I1027 20:00:56.963899       1 server_linux.go:53] "Using iptables proxy"
	I1027 20:00:57.192843       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 20:00:57.293982       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 20:00:57.295145       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1027 20:00:57.295254       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 20:00:57.340945       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 20:00:57.340995       1 server_linux.go:132] "Using iptables Proxier"
	I1027 20:00:57.348701       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 20:00:57.349175       1 server.go:527] "Version info" version="v1.34.1"
	I1027 20:00:57.349198       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 20:00:57.355749       1 config.go:106] "Starting endpoint slice config controller"
	I1027 20:00:57.355772       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 20:00:57.356056       1 config.go:200] "Starting service config controller"
	I1027 20:00:57.356071       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 20:00:57.358406       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 20:00:57.358436       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 20:00:57.359504       1 config.go:309] "Starting node config controller"
	I1027 20:00:57.359522       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 20:00:57.359529       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 20:00:57.456560       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 20:00:57.456662       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 20:00:57.458595       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [75c7134600c5c012795172e75d3a8a7ce7dfa5f7d06a557943649171a009abd6] <==
	I1027 20:00:54.633840       1 serving.go:386] Generated self-signed cert in-memory
	I1027 20:00:57.186845       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 20:00:57.186880       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 20:00:57.203837       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 20:00:57.206081       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1027 20:00:57.206202       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1027 20:00:57.206267       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 20:00:57.211234       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:00:57.211276       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:00:57.211298       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 20:00:57.211304       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 20:00:57.306476       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1027 20:00:57.311897       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 20:00:57.312004       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 20:00:59 no-preload-300878 kubelet[767]: I1027 20:00:59.082606     767 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4w8x\" (UniqueName: \"kubernetes.io/projected/c3f77740-952e-48ea-b5fe-d07800ef585f-kube-api-access-l4w8x\") pod \"kubernetes-dashboard-855c9754f9-hqxgb\" (UID: \"c3f77740-952e-48ea-b5fe-d07800ef585f\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hqxgb"
	Oct 27 20:00:59 no-preload-300878 kubelet[767]: W1027 20:00:59.361711     767 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5f7533431bd6ad05927fc6ee4ffbe127ba4e775919d072603a24e3cd4e1d5d89/crio-44ac3d2be12733e7e06d23f818c327414a507746ad4091e630930a952398a8dc WatchSource:0}: Error finding container 44ac3d2be12733e7e06d23f818c327414a507746ad4091e630930a952398a8dc: Status 404 returned error can't find the container with id 44ac3d2be12733e7e06d23f818c327414a507746ad4091e630930a952398a8dc
	Oct 27 20:00:59 no-preload-300878 kubelet[767]: W1027 20:00:59.363412     767 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5f7533431bd6ad05927fc6ee4ffbe127ba4e775919d072603a24e3cd4e1d5d89/crio-12950a2f56cc558942c2b3d8792e5546d8432ff771d4e5cd01cd9e8906e265d0 WatchSource:0}: Error finding container 12950a2f56cc558942c2b3d8792e5546d8432ff771d4e5cd01cd9e8906e265d0: Status 404 returned error can't find the container with id 12950a2f56cc558942c2b3d8792e5546d8432ff771d4e5cd01cd9e8906e265d0
	Oct 27 20:01:04 no-preload-300878 kubelet[767]: I1027 20:01:04.584092     767 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 27 20:01:04 no-preload-300878 kubelet[767]: I1027 20:01:04.654647     767 scope.go:117] "RemoveContainer" containerID="791b4f9fec6840e7391b4608ef4231a34141f6988492418ab62f86aec29bb939"
	Oct 27 20:01:05 no-preload-300878 kubelet[767]: I1027 20:01:05.658809     767 scope.go:117] "RemoveContainer" containerID="791b4f9fec6840e7391b4608ef4231a34141f6988492418ab62f86aec29bb939"
	Oct 27 20:01:05 no-preload-300878 kubelet[767]: I1027 20:01:05.658946     767 scope.go:117] "RemoveContainer" containerID="68caf56d837a229d05c8f1654e944dba99fa24fa78befabc04c543175571b2ad"
	Oct 27 20:01:05 no-preload-300878 kubelet[767]: E1027 20:01:05.659154     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p8q7f_kubernetes-dashboard(76bb9163-da1c-4c9e-8453-a56a9f293563)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8q7f" podUID="76bb9163-da1c-4c9e-8453-a56a9f293563"
	Oct 27 20:01:06 no-preload-300878 kubelet[767]: I1027 20:01:06.663988     767 scope.go:117] "RemoveContainer" containerID="68caf56d837a229d05c8f1654e944dba99fa24fa78befabc04c543175571b2ad"
	Oct 27 20:01:06 no-preload-300878 kubelet[767]: E1027 20:01:06.665641     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p8q7f_kubernetes-dashboard(76bb9163-da1c-4c9e-8453-a56a9f293563)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8q7f" podUID="76bb9163-da1c-4c9e-8453-a56a9f293563"
	Oct 27 20:01:08 no-preload-300878 kubelet[767]: I1027 20:01:08.924519     767 scope.go:117] "RemoveContainer" containerID="68caf56d837a229d05c8f1654e944dba99fa24fa78befabc04c543175571b2ad"
	Oct 27 20:01:08 no-preload-300878 kubelet[767]: E1027 20:01:08.924734     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p8q7f_kubernetes-dashboard(76bb9163-da1c-4c9e-8453-a56a9f293563)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8q7f" podUID="76bb9163-da1c-4c9e-8453-a56a9f293563"
	Oct 27 20:01:20 no-preload-300878 kubelet[767]: I1027 20:01:20.506610     767 scope.go:117] "RemoveContainer" containerID="68caf56d837a229d05c8f1654e944dba99fa24fa78befabc04c543175571b2ad"
	Oct 27 20:01:20 no-preload-300878 kubelet[767]: I1027 20:01:20.703222     767 scope.go:117] "RemoveContainer" containerID="68caf56d837a229d05c8f1654e944dba99fa24fa78befabc04c543175571b2ad"
	Oct 27 20:01:20 no-preload-300878 kubelet[767]: I1027 20:01:20.703480     767 scope.go:117] "RemoveContainer" containerID="ff4d1e7aa0afdad72f13361ebbf1d1951f4d0389c1da1547bb8463d6cbfc6973"
	Oct 27 20:01:20 no-preload-300878 kubelet[767]: E1027 20:01:20.703655     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p8q7f_kubernetes-dashboard(76bb9163-da1c-4c9e-8453-a56a9f293563)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8q7f" podUID="76bb9163-da1c-4c9e-8453-a56a9f293563"
	Oct 27 20:01:20 no-preload-300878 kubelet[767]: I1027 20:01:20.722626     767 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hqxgb" podStartSLOduration=10.875250099 podStartE2EDuration="22.72261022s" podCreationTimestamp="2025-10-27 20:00:58 +0000 UTC" firstStartedPulling="2025-10-27 20:00:59.368836026 +0000 UTC m=+9.074621729" lastFinishedPulling="2025-10-27 20:01:11.216196139 +0000 UTC m=+20.921981850" observedRunningTime="2025-10-27 20:01:11.69300447 +0000 UTC m=+21.398790173" watchObservedRunningTime="2025-10-27 20:01:20.72261022 +0000 UTC m=+30.428395923"
	Oct 27 20:01:26 no-preload-300878 kubelet[767]: I1027 20:01:26.719827     767 scope.go:117] "RemoveContainer" containerID="ce4b2f0831d4b6d80de7c02268477bd3775a8d23bb376b0d95e7a73ee6e7f12f"
	Oct 27 20:01:28 no-preload-300878 kubelet[767]: I1027 20:01:28.925100     767 scope.go:117] "RemoveContainer" containerID="ff4d1e7aa0afdad72f13361ebbf1d1951f4d0389c1da1547bb8463d6cbfc6973"
	Oct 27 20:01:28 no-preload-300878 kubelet[767]: E1027 20:01:28.925276     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p8q7f_kubernetes-dashboard(76bb9163-da1c-4c9e-8453-a56a9f293563)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8q7f" podUID="76bb9163-da1c-4c9e-8453-a56a9f293563"
	Oct 27 20:01:40 no-preload-300878 kubelet[767]: I1027 20:01:40.507261     767 scope.go:117] "RemoveContainer" containerID="ff4d1e7aa0afdad72f13361ebbf1d1951f4d0389c1da1547bb8463d6cbfc6973"
	Oct 27 20:01:40 no-preload-300878 kubelet[767]: E1027 20:01:40.507458     767 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-p8q7f_kubernetes-dashboard(76bb9163-da1c-4c9e-8453-a56a9f293563)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-p8q7f" podUID="76bb9163-da1c-4c9e-8453-a56a9f293563"
	Oct 27 20:01:48 no-preload-300878 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 20:01:48 no-preload-300878 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 20:01:48 no-preload-300878 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [845ed893ecacdcea3dadb975971aab3db09c283ebdfc1d1660e45965d8599714] <==
	2025/10/27 20:01:11 Starting overwatch
	2025/10/27 20:01:11 Using namespace: kubernetes-dashboard
	2025/10/27 20:01:11 Using in-cluster config to connect to apiserver
	2025/10/27 20:01:11 Using secret token for csrf signing
	2025/10/27 20:01:11 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 20:01:11 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 20:01:11 Successful initial request to the apiserver, version: v1.34.1
	2025/10/27 20:01:11 Generating JWE encryption key
	2025/10/27 20:01:11 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 20:01:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 20:01:11 Initializing JWE encryption key from synchronized object
	2025/10/27 20:01:11 Creating in-cluster Sidecar client
	2025/10/27 20:01:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 20:01:11 Serving insecurely on HTTP port: 9090
	2025/10/27 20:01:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [b445fba8c8ac67334825dd1de6351eea9e0bc3f983f24bfdffca5c882248e749] <==
	I1027 20:01:26.790014       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 20:01:26.790131       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 20:01:26.792453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:30.252141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:34.514394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:38.112894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:41.166679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:44.193710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:44.198505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 20:01:44.198682       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 20:01:44.199136       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3c1092eb-a7a2-455f-b121-d7c4d1adde3a", APIVersion:"v1", ResourceVersion:"677", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-300878_b9d259fd-3af8-4b1c-abe6-c933eb8312e2 became leader
	I1027 20:01:44.199372       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-300878_b9d259fd-3af8-4b1c-abe6-c933eb8312e2!
	W1027 20:01:44.211038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:44.227400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 20:01:44.299945       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-300878_b9d259fd-3af8-4b1c-abe6-c933eb8312e2!
	W1027 20:01:46.230663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:46.237821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:48.241728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:48.252288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:50.255329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:50.267783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:52.270606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:52.276519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:54.282262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:01:54.295620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ce4b2f0831d4b6d80de7c02268477bd3775a8d23bb376b0d95e7a73ee6e7f12f] <==
	I1027 20:00:56.444380       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 20:01:26.487246       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
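The two storage-provisioner logs above tell the restart story: the first instance (ce4b2f08…) crashed when its startup probe of the apiserver service VIP timed out, and the replacement (b445fba8…) came up cleanly and won the leader lease. A minimal sketch of that kind of startup probe, assuming client-go; the actual provisioner code may differ in detail:

	package main

	import (
		"fmt"
		"os"
		"time"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// In-cluster config resolves to the service VIP seen in the failure
		// above (https://10.96.0.1:443).
		cfg, err := rest.InClusterConfig()
		if err != nil {
			fmt.Fprintln(os.Stderr, "in-cluster config:", err)
			os.Exit(1)
		}
		cfg.Timeout = 32 * time.Second // matches the "?timeout=32s" in the logged URL
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			fmt.Fprintln(os.Stderr, "clientset:", err)
			os.Exit(1)
		}
		// GET /version; with an unreachable apiserver this is the branch that
		// produces "error getting server version: ... i/o timeout".
		v, err := cs.Discovery().ServerVersion()
		if err != nil {
			fmt.Fprintln(os.Stderr, "error getting server version:", err)
			os.Exit(1)
		}
		fmt.Println("apiserver version:", v.GitVersion)
	}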
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-300878 -n no-preload-300878
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-300878 -n no-preload-300878: exit status 2 (491.876739ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-300878 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (8.00s)
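The pause path that fails here is spelled out in the embed-certs trace below: over SSH, minikube first checks whether kubelet is active and disables it, then tries to enumerate running CRI containers. A sketch of the kubelet step, assuming it maps onto plain systemctl semantics (the helper name is illustrative, not minikube's API):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// kubeletRunning mirrors the "kubelet running: true/false" lines in the
	// pause traces: `systemctl is-active --quiet <unit>` exits 0 only when
	// the unit is active.
	func kubeletRunning() bool {
		return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
	}

	func main() {
		if kubeletRunning() {
			// Counterpart of `sudo systemctl disable --now kubelet` in the trace.
			if err := exec.Command("sudo", "systemctl", "disable", "--now", "kubelet").Run(); err != nil {
				fmt.Println("disable kubelet:", err)
			}
		}
	}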

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (7.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-629838 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-629838 --alsologtostderr -v=1: exit status 80 (2.612007564s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-629838 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 20:02:35.259781  468982 out.go:360] Setting OutFile to fd 1 ...
	I1027 20:02:35.260001  468982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:02:35.260015  468982 out.go:374] Setting ErrFile to fd 2...
	I1027 20:02:35.260021  468982 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:02:35.260328  468982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 20:02:35.260724  468982 out.go:368] Setting JSON to false
	I1027 20:02:35.260752  468982 mustload.go:65] Loading cluster: embed-certs-629838
	I1027 20:02:35.261828  468982 config.go:182] Loaded profile config "embed-certs-629838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:02:35.262393  468982 cli_runner.go:164] Run: docker container inspect embed-certs-629838 --format={{.State.Status}}
	I1027 20:02:35.282415  468982 host.go:66] Checking if "embed-certs-629838" exists ...
	I1027 20:02:35.282732  468982 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 20:02:35.366666  468982 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-27 20:02:35.357364306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 20:02:35.367440  468982 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-629838 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1027 20:02:35.371200  468982 out.go:179] * Pausing node embed-certs-629838 ... 
	I1027 20:02:35.374099  468982 host.go:66] Checking if "embed-certs-629838" exists ...
	I1027 20:02:35.374452  468982 ssh_runner.go:195] Run: systemctl --version
	I1027 20:02:35.374502  468982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629838
	I1027 20:02:35.396414  468982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/embed-certs-629838/id_rsa Username:docker}
	I1027 20:02:35.510087  468982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 20:02:35.532681  468982 pause.go:52] kubelet running: true
	I1027 20:02:35.532750  468982 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 20:02:35.838562  468982 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 20:02:35.838648  468982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 20:02:35.920729  468982 cri.go:89] found id: "e2942edb63853d027badc94c1e3181d04390c3297cb5ee2e519fc254c1c790eb"
	I1027 20:02:35.920800  468982 cri.go:89] found id: "36598828b97e660c2e2764dd87dc4bc9566a206293908f2e298358a5c6ba4a21"
	I1027 20:02:35.920812  468982 cri.go:89] found id: "82adb2c58510baa0e605b645b22fc5bef69cd39b85ce04adde1bd01d6a2127b3"
	I1027 20:02:35.920816  468982 cri.go:89] found id: "fff55bbbe9a89678e38123d75f96796776f16fde5fd525adf749545f93e256a0"
	I1027 20:02:35.920820  468982 cri.go:89] found id: "1ec2027ec7db30a16a7dcd76b4cab74cf0f0fee27a8a8159c50979c610bb6883"
	I1027 20:02:35.920823  468982 cri.go:89] found id: "62e147a7d6f68f8e6cb774b163bbf47eeada86a4f24470d5ec3b80cc23844557"
	I1027 20:02:35.920827  468982 cri.go:89] found id: "d4ab5323a8b08831eb177e903d21bbe79ed018f63e07293ef7e383fc941ad31f"
	I1027 20:02:35.920830  468982 cri.go:89] found id: "b939b6634b4d016b4989e6e47aa2060602b0e9582bcbe6e92ed906e4e1c2d5b5"
	I1027 20:02:35.920834  468982 cri.go:89] found id: "0ad8adca28e83fc64435c7179ec7d5ad6ddbf98b75047dd679977005035865ac"
	I1027 20:02:35.920845  468982 cri.go:89] found id: "05a28f65e7306f32e6fe44d10f1eb7513a2f00be81d35bf861d871965164280c"
	I1027 20:02:35.920851  468982 cri.go:89] found id: "098bd45d59457750f47cfad92e238708c4767cf897a1ae3b97f78333c2fee810"
	I1027 20:02:35.920855  468982 cri.go:89] found id: ""
	I1027 20:02:35.920906  468982 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 20:02:35.932166  468982 retry.go:31] will retry after 231.67896ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T20:02:35Z" level=error msg="open /run/runc: no such file or directory"
	I1027 20:02:36.164694  468982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 20:02:36.179340  468982 pause.go:52] kubelet running: false
	I1027 20:02:36.179432  468982 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 20:02:36.361150  468982 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 20:02:36.361235  468982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 20:02:36.428197  468982 cri.go:89] found id: "e2942edb63853d027badc94c1e3181d04390c3297cb5ee2e519fc254c1c790eb"
	I1027 20:02:36.428223  468982 cri.go:89] found id: "36598828b97e660c2e2764dd87dc4bc9566a206293908f2e298358a5c6ba4a21"
	I1027 20:02:36.428228  468982 cri.go:89] found id: "82adb2c58510baa0e605b645b22fc5bef69cd39b85ce04adde1bd01d6a2127b3"
	I1027 20:02:36.428232  468982 cri.go:89] found id: "fff55bbbe9a89678e38123d75f96796776f16fde5fd525adf749545f93e256a0"
	I1027 20:02:36.428235  468982 cri.go:89] found id: "1ec2027ec7db30a16a7dcd76b4cab74cf0f0fee27a8a8159c50979c610bb6883"
	I1027 20:02:36.428239  468982 cri.go:89] found id: "62e147a7d6f68f8e6cb774b163bbf47eeada86a4f24470d5ec3b80cc23844557"
	I1027 20:02:36.428242  468982 cri.go:89] found id: "d4ab5323a8b08831eb177e903d21bbe79ed018f63e07293ef7e383fc941ad31f"
	I1027 20:02:36.428245  468982 cri.go:89] found id: "b939b6634b4d016b4989e6e47aa2060602b0e9582bcbe6e92ed906e4e1c2d5b5"
	I1027 20:02:36.428248  468982 cri.go:89] found id: "0ad8adca28e83fc64435c7179ec7d5ad6ddbf98b75047dd679977005035865ac"
	I1027 20:02:36.428253  468982 cri.go:89] found id: "05a28f65e7306f32e6fe44d10f1eb7513a2f00be81d35bf861d871965164280c"
	I1027 20:02:36.428256  468982 cri.go:89] found id: "098bd45d59457750f47cfad92e238708c4767cf897a1ae3b97f78333c2fee810"
	I1027 20:02:36.428259  468982 cri.go:89] found id: ""
	I1027 20:02:36.428308  468982 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 20:02:36.439691  468982 retry.go:31] will retry after 344.312745ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T20:02:36Z" level=error msg="open /run/runc: no such file or directory"
	I1027 20:02:36.784252  468982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 20:02:36.797090  468982 pause.go:52] kubelet running: false
	I1027 20:02:36.797169  468982 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 20:02:36.957898  468982 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 20:02:36.958038  468982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 20:02:37.052742  468982 cri.go:89] found id: "e2942edb63853d027badc94c1e3181d04390c3297cb5ee2e519fc254c1c790eb"
	I1027 20:02:37.052765  468982 cri.go:89] found id: "36598828b97e660c2e2764dd87dc4bc9566a206293908f2e298358a5c6ba4a21"
	I1027 20:02:37.052770  468982 cri.go:89] found id: "82adb2c58510baa0e605b645b22fc5bef69cd39b85ce04adde1bd01d6a2127b3"
	I1027 20:02:37.052774  468982 cri.go:89] found id: "fff55bbbe9a89678e38123d75f96796776f16fde5fd525adf749545f93e256a0"
	I1027 20:02:37.052789  468982 cri.go:89] found id: "1ec2027ec7db30a16a7dcd76b4cab74cf0f0fee27a8a8159c50979c610bb6883"
	I1027 20:02:37.052795  468982 cri.go:89] found id: "62e147a7d6f68f8e6cb774b163bbf47eeada86a4f24470d5ec3b80cc23844557"
	I1027 20:02:37.052820  468982 cri.go:89] found id: "d4ab5323a8b08831eb177e903d21bbe79ed018f63e07293ef7e383fc941ad31f"
	I1027 20:02:37.052825  468982 cri.go:89] found id: "b939b6634b4d016b4989e6e47aa2060602b0e9582bcbe6e92ed906e4e1c2d5b5"
	I1027 20:02:37.052828  468982 cri.go:89] found id: "0ad8adca28e83fc64435c7179ec7d5ad6ddbf98b75047dd679977005035865ac"
	I1027 20:02:37.052841  468982 cri.go:89] found id: "05a28f65e7306f32e6fe44d10f1eb7513a2f00be81d35bf861d871965164280c"
	I1027 20:02:37.052870  468982 cri.go:89] found id: "098bd45d59457750f47cfad92e238708c4767cf897a1ae3b97f78333c2fee810"
	I1027 20:02:37.052888  468982 cri.go:89] found id: ""
	I1027 20:02:37.052969  468982 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 20:02:37.064661  468982 retry.go:31] will retry after 414.438868ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T20:02:37Z" level=error msg="open /run/runc: no such file or directory"
	I1027 20:02:37.479989  468982 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 20:02:37.493996  468982 pause.go:52] kubelet running: false
	I1027 20:02:37.494058  468982 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 20:02:37.688519  468982 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 20:02:37.688606  468982 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 20:02:37.766315  468982 cri.go:89] found id: "e2942edb63853d027badc94c1e3181d04390c3297cb5ee2e519fc254c1c790eb"
	I1027 20:02:37.766341  468982 cri.go:89] found id: "36598828b97e660c2e2764dd87dc4bc9566a206293908f2e298358a5c6ba4a21"
	I1027 20:02:37.766363  468982 cri.go:89] found id: "82adb2c58510baa0e605b645b22fc5bef69cd39b85ce04adde1bd01d6a2127b3"
	I1027 20:02:37.766367  468982 cri.go:89] found id: "fff55bbbe9a89678e38123d75f96796776f16fde5fd525adf749545f93e256a0"
	I1027 20:02:37.766370  468982 cri.go:89] found id: "1ec2027ec7db30a16a7dcd76b4cab74cf0f0fee27a8a8159c50979c610bb6883"
	I1027 20:02:37.766374  468982 cri.go:89] found id: "62e147a7d6f68f8e6cb774b163bbf47eeada86a4f24470d5ec3b80cc23844557"
	I1027 20:02:37.766378  468982 cri.go:89] found id: "d4ab5323a8b08831eb177e903d21bbe79ed018f63e07293ef7e383fc941ad31f"
	I1027 20:02:37.766381  468982 cri.go:89] found id: "b939b6634b4d016b4989e6e47aa2060602b0e9582bcbe6e92ed906e4e1c2d5b5"
	I1027 20:02:37.766387  468982 cri.go:89] found id: "0ad8adca28e83fc64435c7179ec7d5ad6ddbf98b75047dd679977005035865ac"
	I1027 20:02:37.766393  468982 cri.go:89] found id: "05a28f65e7306f32e6fe44d10f1eb7513a2f00be81d35bf861d871965164280c"
	I1027 20:02:37.766396  468982 cri.go:89] found id: "098bd45d59457750f47cfad92e238708c4767cf897a1ae3b97f78333c2fee810"
	I1027 20:02:37.766400  468982 cri.go:89] found id: ""
	I1027 20:02:37.766446  468982 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 20:02:37.781397  468982 out.go:203] 
	W1027 20:02:37.784452  468982 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T20:02:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 20:02:37.784476  468982 out.go:285] * 
	W1027 20:02:37.791165  468982 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 20:02:37.794063  468982 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-629838 --alsologtostderr -v=1 failed: exit status 80
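Before surfacing GUEST_PAUSE, the trace above retries `sudo runc list -f json` three times with growing, jittered delays (~232ms, ~344ms, ~414ms); the complaint each time is that runc's default state directory /run/runc does not exist on the node, which suggests the CRI runtime keeps its container state under a different root. A generic sketch of that retry shape (names and base delay are illustrative, not minikube's retry.go):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff re-runs fn until it succeeds or the attempts are
	// exhausted, sleeping a jittered, growing delay between tries -- the
	// shape of the "will retry after 231.67896ms" lines above.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err // the caller turns this into the GUEST_PAUSE exit above
	}

	func main() {
		_ = retryWithBackoff(3, 200*time.Millisecond, func() error {
			return errors.New("open /run/runc: no such file or directory")
		})
	}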
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-629838
helpers_test.go:243: (dbg) docker inspect embed-certs-629838:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c4f57eb9d97cb78db15678f52105eae7c06964f89c91a987456d1c3d7cf90fa0",
	        "Created": "2025-10-27T19:59:47.181587162Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 463126,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T20:01:29.541702446Z",
	            "FinishedAt": "2025-10-27T20:01:28.739237511Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/c4f57eb9d97cb78db15678f52105eae7c06964f89c91a987456d1c3d7cf90fa0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c4f57eb9d97cb78db15678f52105eae7c06964f89c91a987456d1c3d7cf90fa0/hostname",
	        "HostsPath": "/var/lib/docker/containers/c4f57eb9d97cb78db15678f52105eae7c06964f89c91a987456d1c3d7cf90fa0/hosts",
	        "LogPath": "/var/lib/docker/containers/c4f57eb9d97cb78db15678f52105eae7c06964f89c91a987456d1c3d7cf90fa0/c4f57eb9d97cb78db15678f52105eae7c06964f89c91a987456d1c3d7cf90fa0-json.log",
	        "Name": "/embed-certs-629838",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-629838:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-629838",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c4f57eb9d97cb78db15678f52105eae7c06964f89c91a987456d1c3d7cf90fa0",
	                "LowerDir": "/var/lib/docker/overlay2/11e03d0ec6d0fa1ee685eb208aa9a39493e58b461a663b0cbbf1e2b57f205fd2-init/diff:/var/lib/docker/overlay2/0567218bbe05e8b59b2aef7ad82032d6716040c1f9d9d73e7de8682101079f76/diff",
	                "MergedDir": "/var/lib/docker/overlay2/11e03d0ec6d0fa1ee685eb208aa9a39493e58b461a663b0cbbf1e2b57f205fd2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/11e03d0ec6d0fa1ee685eb208aa9a39493e58b461a663b0cbbf1e2b57f205fd2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/11e03d0ec6d0fa1ee685eb208aa9a39493e58b461a663b0cbbf1e2b57f205fd2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-629838",
	                "Source": "/var/lib/docker/volumes/embed-certs-629838/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-629838",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-629838",
	                "name.minikube.sigs.k8s.io": "embed-certs-629838",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3a3d568ff1480da30ecdc631ad9f93a95fff682a65b5eea03cd18b9069e202ae",
	            "SandboxKey": "/var/run/docker/netns/3a3d568ff148",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-629838": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:e6:84:10:19:ce",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e984493782940b4e21ac1d18681d3b8ebbf5771aadf9508ab04a1597fbf530b4",
	                    "EndpointID": "aacd75f77c1b7dfdc9a07170b1fdc23f711004e769f697cef656a1f9107cde74",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-629838",
	                        "c4f57eb9d97c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
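Everything the post-mortem needs from the dump above lives in the State block: the container is Running and not Paused even though the pause command failed. A sketch of pulling just those fields, assuming the docker CLI is on PATH:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// docker inspect prints a JSON array; only the State block shown in the
	// dump above matters here.
	type inspectEntry struct {
		State struct {
			Status  string `json:"Status"`
			Running bool   `json:"Running"`
			Paused  bool   `json:"Paused"`
		} `json:"State"`
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "embed-certs-629838").Output()
		if err != nil {
			fmt.Println("inspect:", err)
			return
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			fmt.Println("decode:", err)
			return
		}
		if len(entries) == 0 {
			fmt.Println("no such container")
			return
		}
		s := entries[0].State
		fmt.Printf("status=%s running=%v paused=%v\n", s.Status, s.Running, s.Paused)
	}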
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-629838 -n embed-certs-629838
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-629838 -n embed-certs-629838: exit status 2 (344.327568ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
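The --format argument to minikube status is a Go text/template rendered over the status struct, which is why the raw stdout above is the bare word "Running". A sketch of the same mechanism with an illustrative stand-in struct (minikube's real one has more fields):

	package main

	import (
		"os"
		"text/template"
	)

	// Stand-in for the fields this report formats with {{.Host}} and
	// {{.APIServer}}.
	type Status struct {
		Host      string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", APIServer: "Paused"}
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		_ = tmpl.Execute(os.Stdout, st) // prints: Running
	}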
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-629838 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-629838 logs -n 25: (1.47106714s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-942644 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-942644       │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:59 UTC │
	│ start   │ -p cert-expiration-280013 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-280013       │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:58 UTC │
	│ delete  │ -p cert-expiration-280013                                                                                                                                                                                                                     │ cert-expiration-280013       │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:59 UTC │
	│ start   │ -p no-preload-300878 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 20:00 UTC │
	│ image   │ old-k8s-version-942644 image list --format=json                                                                                                                                                                                               │ old-k8s-version-942644       │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 19:59 UTC │
	│ pause   │ -p old-k8s-version-942644 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-942644       │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │                     │
	│ delete  │ -p old-k8s-version-942644                                                                                                                                                                                                                     │ old-k8s-version-942644       │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 19:59 UTC │
	│ delete  │ -p old-k8s-version-942644                                                                                                                                                                                                                     │ old-k8s-version-942644       │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 19:59 UTC │
	│ start   │ -p embed-certs-629838 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 20:01 UTC │
	│ addons  │ enable metrics-server -p no-preload-300878 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:00 UTC │                     │
	│ stop    │ -p no-preload-300878 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:00 UTC │ 27 Oct 25 20:00 UTC │
	│ addons  │ enable dashboard -p no-preload-300878 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:00 UTC │ 27 Oct 25 20:00 UTC │
	│ start   │ -p no-preload-300878 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:00 UTC │ 27 Oct 25 20:01 UTC │
	│ addons  │ enable metrics-server -p embed-certs-629838 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │                     │
	│ stop    │ -p embed-certs-629838 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ addons  │ enable dashboard -p embed-certs-629838 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ start   │ -p embed-certs-629838 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:02 UTC │
	│ image   │ no-preload-300878 image list --format=json                                                                                                                                                                                                    │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ pause   │ -p no-preload-300878 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │                     │
	│ delete  │ -p no-preload-300878                                                                                                                                                                                                                          │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ delete  │ -p no-preload-300878                                                                                                                                                                                                                          │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ delete  │ -p disable-driver-mounts-230052                                                                                                                                                                                                               │ disable-driver-mounts-230052 │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:02 UTC │
	│ start   │ -p default-k8s-diff-port-073048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-073048 │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │                     │
	│ image   │ embed-certs-629838 image list --format=json                                                                                                                                                                                                   │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:02 UTC │
	│ pause   │ -p embed-certs-629838 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 20:02:00
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 20:02:00.452179  466537 out.go:360] Setting OutFile to fd 1 ...
	I1027 20:02:00.452428  466537 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:02:00.452464  466537 out.go:374] Setting ErrFile to fd 2...
	I1027 20:02:00.452489  466537 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:02:00.452812  466537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 20:02:00.453385  466537 out.go:368] Setting JSON to false
	I1027 20:02:00.454624  466537 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9873,"bootTime":1761585448,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1027 20:02:00.454761  466537 start.go:141] virtualization:  
	I1027 20:02:00.458836  466537 out.go:179] * [default-k8s-diff-port-073048] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 20:02:00.462551  466537 notify.go:220] Checking for updates...
	I1027 20:02:00.463063  466537 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 20:02:00.466222  466537 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 20:02:00.469387  466537 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 20:02:00.472510  466537 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube
	I1027 20:02:00.475669  466537 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 20:02:00.478684  466537 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 20:02:00.482439  466537 config.go:182] Loaded profile config "embed-certs-629838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:02:00.482576  466537 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 20:02:00.520247  466537 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 20:02:00.520455  466537 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 20:02:00.593093  466537 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 20:02:00.583744316 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 20:02:00.593208  466537 docker.go:318] overlay module found
	I1027 20:02:00.596422  466537 out.go:179] * Using the docker driver based on user configuration
	I1027 20:02:00.599446  466537 start.go:305] selected driver: docker
	I1027 20:02:00.599466  466537 start.go:925] validating driver "docker" against <nil>
	I1027 20:02:00.599480  466537 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 20:02:00.600231  466537 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 20:02:00.655639  466537 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 20:02:00.646894587 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 20:02:00.655788  466537 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1027 20:02:00.656021  466537 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 20:02:00.658955  466537 out.go:179] * Using Docker driver with root privileges
	I1027 20:02:00.661858  466537 cni.go:84] Creating CNI manager for ""
	I1027 20:02:00.661940  466537 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 20:02:00.661958  466537 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 20:02:00.662047  466537 start.go:349] cluster config:
	{Name:default-k8s-diff-port-073048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-073048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:02:00.665112  466537 out.go:179] * Starting "default-k8s-diff-port-073048" primary control-plane node in "default-k8s-diff-port-073048" cluster
	I1027 20:02:00.668059  466537 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 20:02:00.671159  466537 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 20:02:00.674097  466537 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:02:00.674160  466537 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 20:02:00.674173  466537 cache.go:58] Caching tarball of preloaded images
	I1027 20:02:00.674186  466537 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 20:02:00.674278  466537 preload.go:233] Found /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 20:02:00.674287  466537 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 20:02:00.674402  466537 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/config.json ...
	I1027 20:02:00.674428  466537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/config.json: {Name:mk7cebe9ec20daf0bb7cbc48e9425df7f73c402b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:00.698506  466537 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 20:02:00.698535  466537 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 20:02:00.698550  466537 cache.go:232] Successfully downloaded all kic artifacts
	I1027 20:02:00.698572  466537 start.go:360] acquireMachinesLock for default-k8s-diff-port-073048: {Name:mk90694371f699bc05745bfd1e2e3f9abdf20057 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 20:02:00.698674  466537 start.go:364] duration metric: took 85.905µs to acquireMachinesLock for "default-k8s-diff-port-073048"
	I1027 20:02:00.698707  466537 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-073048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-073048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 20:02:00.698773  466537 start.go:125] createHost starting for "" (driver="docker")
	W1027 20:02:01.198917  462995 pod_ready.go:104] pod "coredns-66bc5c9577-ch8jv" is not "Ready", error: <nil>
	W1027 20:02:03.699467  462995 pod_ready.go:104] pod "coredns-66bc5c9577-ch8jv" is not "Ready", error: <nil>
	I1027 20:02:00.702140  466537 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 20:02:00.702379  466537 start.go:159] libmachine.API.Create for "default-k8s-diff-port-073048" (driver="docker")
	I1027 20:02:00.702443  466537 client.go:168] LocalClient.Create starting
	I1027 20:02:00.702539  466537 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem
	I1027 20:02:00.702579  466537 main.go:141] libmachine: Decoding PEM data...
	I1027 20:02:00.702599  466537 main.go:141] libmachine: Parsing certificate...
	I1027 20:02:00.702657  466537 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem
	I1027 20:02:00.702681  466537 main.go:141] libmachine: Decoding PEM data...
	I1027 20:02:00.702692  466537 main.go:141] libmachine: Parsing certificate...
	I1027 20:02:00.703147  466537 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-073048 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 20:02:00.719434  466537 cli_runner.go:211] docker network inspect default-k8s-diff-port-073048 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 20:02:00.719516  466537 network_create.go:284] running [docker network inspect default-k8s-diff-port-073048] to gather additional debugging logs...
	I1027 20:02:00.719537  466537 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-073048
	W1027 20:02:00.735065  466537 cli_runner.go:211] docker network inspect default-k8s-diff-port-073048 returned with exit code 1
	I1027 20:02:00.735108  466537 network_create.go:287] error running [docker network inspect default-k8s-diff-port-073048]: docker network inspect default-k8s-diff-port-073048: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-073048 not found
	I1027 20:02:00.735123  466537 network_create.go:289] output of [docker network inspect default-k8s-diff-port-073048]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-073048 not found
	
	** /stderr **
	I1027 20:02:00.735222  466537 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 20:02:00.752390  466537 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-74ee89127400 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:aa:c7:29:bd:7a:a9} reservation:<nil>}
	I1027 20:02:00.752809  466537 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9c57ca829ac8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:0a:24:08:53:d6:07} reservation:<nil>}
	I1027 20:02:00.753065  466537 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b7a7c45fd176 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:52:e6:cd:c8:a6} reservation:<nil>}
	I1027 20:02:00.753376  466537 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e98449378294 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:b2:53:5f:9c:fb:7f} reservation:<nil>}
	I1027 20:02:00.753812  466537 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a4a150}
	I1027 20:02:00.753837  466537 network_create.go:124] attempt to create docker network default-k8s-diff-port-073048 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1027 20:02:00.753896  466537 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-073048 default-k8s-diff-port-073048
	I1027 20:02:00.816664  466537 network_create.go:108] docker network default-k8s-diff-port-073048 192.168.85.0/24 created
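The subnet scan above walks the private 192.168.x.0/24 candidates until it finds a free one, then creates the bridge. To confirm by hand which subnet the profile network got (a sketch using this run's profile name):

	# print the subnet and gateway of the network minikube just created
	docker network inspect default-k8s-diff-port-073048 \
	  --format '{{(index .IPAM.Config 0).Subnet}} gw {{(index .IPAM.Config 0).Gateway}}'
	# expected here: 192.168.85.0/24 gw 192.168.85.1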
	I1027 20:02:00.816698  466537 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-073048" container
	I1027 20:02:00.816797  466537 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 20:02:00.833151  466537 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-073048 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-073048 --label created_by.minikube.sigs.k8s.io=true
	I1027 20:02:00.851135  466537 oci.go:103] Successfully created a docker volume default-k8s-diff-port-073048
	I1027 20:02:00.851223  466537 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-073048-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-073048 --entrypoint /usr/bin/test -v default-k8s-diff-port-073048:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 20:02:01.453324  466537 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-073048
	I1027 20:02:01.453367  466537 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:02:01.453386  466537 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 20:02:01.453470  466537 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-073048:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1027 20:02:06.198265  462995 pod_ready.go:104] pod "coredns-66bc5c9577-ch8jv" is not "Ready", error: <nil>
	W1027 20:02:08.199278  462995 pod_ready.go:104] pod "coredns-66bc5c9577-ch8jv" is not "Ready", error: <nil>
	I1027 20:02:05.905632  466537 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-073048:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.452119322s)
	I1027 20:02:05.905667  466537 kic.go:203] duration metric: took 4.452277226s to extract preloaded images to volume ...
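The preload tarball is unpacked straight into the profile's docker volume, which the node container later mounts at /var. A rough spot-check is possible with a throwaway container; that the image store sits under lib/containers inside the volume is an assumption about the preload layout, not something this log confirms:

	# list the top of the volume the preload was extracted into
	docker run --rm --entrypoint /bin/ls \
	  -v default-k8s-diff-port-073048:/var \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 /var/lib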
	W1027 20:02:05.905824  466537 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1027 20:02:05.905933  466537 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 20:02:05.966810  466537 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-073048 --name default-k8s-diff-port-073048 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-073048 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-073048 --network default-k8s-diff-port-073048 --ip 192.168.85.2 --volume default-k8s-diff-port-073048:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
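The node container publishes SSH (22), the API server port (8444), and a handful of service ports on loopback-only ephemeral host ports; the 127.0.0.1:33438 SSH endpoint used later in this log is one of them. A quick way to resolve the mappings by hand:

	# show which 127.0.0.1 host ports back the container's published ports
	docker port default-k8s-diff-port-073048 22
	docker port default-k8s-diff-port-073048 8444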
	I1027 20:02:06.291608  466537 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-073048 --format={{.State.Running}}
	I1027 20:02:06.314583  466537 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-073048 --format={{.State.Status}}
	I1027 20:02:06.343605  466537 cli_runner.go:164] Run: docker exec default-k8s-diff-port-073048 stat /var/lib/dpkg/alternatives/iptables
	I1027 20:02:06.394651  466537 oci.go:144] the created container "default-k8s-diff-port-073048" has a running status.
	I1027 20:02:06.394678  466537 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa...
	I1027 20:02:07.030663  466537 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 20:02:07.059290  466537 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-073048 --format={{.State.Status}}
	I1027 20:02:07.084914  466537 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 20:02:07.084933  466537 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-073048 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 20:02:07.145298  466537 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-073048 --format={{.State.Status}}
	I1027 20:02:07.170347  466537 machine.go:93] provisionDockerMachine start ...
	I1027 20:02:07.170474  466537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:02:07.200095  466537 main.go:141] libmachine: Using SSH client type: native
	I1027 20:02:07.200460  466537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1027 20:02:07.200474  466537 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 20:02:07.383137  466537 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-073048
	
	I1027 20:02:07.383203  466537 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-073048"
	I1027 20:02:07.383300  466537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:02:07.403582  466537 main.go:141] libmachine: Using SSH client type: native
	I1027 20:02:07.403919  466537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1027 20:02:07.403933  466537 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-073048 && echo "default-k8s-diff-port-073048" | sudo tee /etc/hostname
	I1027 20:02:07.571798  466537 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-073048
	
	I1027 20:02:07.571891  466537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:02:07.594442  466537 main.go:141] libmachine: Using SSH client type: native
	I1027 20:02:07.594825  466537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1027 20:02:07.594849  466537 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-073048' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-073048/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-073048' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 20:02:07.751355  466537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 20:02:07.751379  466537 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-266035/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-266035/.minikube}
	I1027 20:02:07.751397  466537 ubuntu.go:190] setting up certificates
	I1027 20:02:07.751406  466537 provision.go:84] configureAuth start
	I1027 20:02:07.751466  466537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-073048
	I1027 20:02:07.769604  466537 provision.go:143] copyHostCerts
	I1027 20:02:07.769664  466537 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem, removing ...
	I1027 20:02:07.769673  466537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem
	I1027 20:02:07.769755  466537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem (1078 bytes)
	I1027 20:02:07.769860  466537 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem, removing ...
	I1027 20:02:07.769865  466537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem
	I1027 20:02:07.769891  466537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem (1123 bytes)
	I1027 20:02:07.769938  466537 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem, removing ...
	I1027 20:02:07.769943  466537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem
	I1027 20:02:07.769964  466537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem (1675 bytes)
	I1027 20:02:07.770015  466537 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-073048 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-073048 localhost minikube]
	I1027 20:02:08.051831  466537 provision.go:177] copyRemoteCerts
	I1027 20:02:08.051924  466537 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 20:02:08.051994  466537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:02:08.073631  466537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa Username:docker}
	I1027 20:02:08.178858  466537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 20:02:08.199695  466537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1027 20:02:08.217983  466537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 20:02:08.237621  466537 provision.go:87] duration metric: took 486.188281ms to configureAuth
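For readers checking the provisioning step by hand, the SANs that configureAuth baked into the machine server certificate (the san=[...] list above) can be read back with openssl; the path is the ServerCertPath from the auth options a few lines up:

	# list the SANs in the generated server certificate
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'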
	I1027 20:02:08.237646  466537 ubuntu.go:206] setting minikube options for container-runtime
	I1027 20:02:08.237831  466537 config.go:182] Loaded profile config "default-k8s-diff-port-073048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:02:08.237935  466537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:02:08.255377  466537 main.go:141] libmachine: Using SSH client type: native
	I1027 20:02:08.255702  466537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1027 20:02:08.255721  466537 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 20:02:08.601258  466537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 20:02:08.601282  466537 machine.go:96] duration metric: took 1.4309097s to provisionDockerMachine
	I1027 20:02:08.601293  466537 client.go:171] duration metric: took 7.898838782s to LocalClient.Create
	I1027 20:02:08.601306  466537 start.go:167] duration metric: took 7.898928479s to libmachine.API.Create "default-k8s-diff-port-073048"
	I1027 20:02:08.601313  466537 start.go:293] postStartSetup for "default-k8s-diff-port-073048" (driver="docker")
	I1027 20:02:08.601324  466537 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 20:02:08.601385  466537 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 20:02:08.601438  466537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:02:08.619078  466537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa Username:docker}
	I1027 20:02:08.722875  466537 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 20:02:08.726098  466537 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 20:02:08.726137  466537 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 20:02:08.726148  466537 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/addons for local assets ...
	I1027 20:02:08.726217  466537 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/files for local assets ...
	I1027 20:02:08.726337  466537 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem -> 2678802.pem in /etc/ssl/certs
	I1027 20:02:08.726442  466537 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 20:02:08.733702  466537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 20:02:08.751930  466537 start.go:296] duration metric: took 150.600588ms for postStartSetup
	I1027 20:02:08.752363  466537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-073048
	I1027 20:02:08.769852  466537 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/config.json ...
	I1027 20:02:08.770252  466537 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 20:02:08.770325  466537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:02:08.786857  466537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa Username:docker}
	I1027 20:02:08.888998  466537 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 20:02:08.893765  466537 start.go:128] duration metric: took 8.19497725s to createHost
	I1027 20:02:08.893790  466537 start.go:83] releasing machines lock for "default-k8s-diff-port-073048", held for 8.195107272s
	I1027 20:02:08.893871  466537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-073048
	I1027 20:02:08.910520  466537 ssh_runner.go:195] Run: cat /version.json
	I1027 20:02:08.910577  466537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:02:08.910854  466537 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 20:02:08.910921  466537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:02:08.934390  466537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa Username:docker}
	I1027 20:02:08.948997  466537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa Username:docker}
	I1027 20:02:09.038927  466537 ssh_runner.go:195] Run: systemctl --version
	I1027 20:02:09.130963  466537 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 20:02:09.169154  466537 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 20:02:09.173902  466537 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 20:02:09.173973  466537 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 20:02:09.205761  466537 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
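minikube parks conflicting bridge and podman CNI configs by appending a .mk_disabled suffix, as the find/mv above and the "disabled [...]" line show. The effect is visible inside the node:

	# parked configs keep their content; only the name changes
	ls /etc/cni/net.d
	# e.g. 87-podman-bridge.conflist.mk_disabled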
	I1027 20:02:09.205787  466537 start.go:495] detecting cgroup driver to use...
	I1027 20:02:09.205832  466537 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 20:02:09.205900  466537 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 20:02:09.226547  466537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 20:02:09.242610  466537 docker.go:218] disabling cri-docker service (if available) ...
	I1027 20:02:09.242733  466537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 20:02:09.263756  466537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 20:02:09.283357  466537 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 20:02:09.395907  466537 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 20:02:09.517086  466537 docker.go:234] disabling docker service ...
	I1027 20:02:09.517370  466537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 20:02:09.548430  466537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 20:02:09.562448  466537 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 20:02:09.688470  466537 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 20:02:09.820298  466537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 20:02:09.833788  466537 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 20:02:09.851533  466537 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 20:02:09.851601  466537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:02:09.861372  466537 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 20:02:09.861442  466537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:02:09.870830  466537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:02:09.888518  466537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:02:09.898504  466537 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 20:02:09.907024  466537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:02:09.916366  466537 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:02:09.929829  466537 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:02:09.939307  466537 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 20:02:09.947022  466537 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 20:02:09.954205  466537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:02:10.076009  466537 ssh_runner.go:195] Run: sudo systemctl restart crio
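Taken together, the sed edits above leave the CRI-O drop-in with roughly the keys sketched below; the expected contents are reconstructed from the commands in this log, not captured from the node:

	# inspect the drop-in after minikube's edits
	sudo cat /etc/crio/crio.conf.d/02-crio.conf
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]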
	I1027 20:02:10.225576  466537 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 20:02:10.225641  466537 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 20:02:10.229644  466537 start.go:563] Will wait 60s for crictl version
	I1027 20:02:10.229705  466537 ssh_runner.go:195] Run: which crictl
	I1027 20:02:10.234131  466537 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 20:02:10.266879  466537 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
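That version call succeeds without any endpoint flag because the /etc/crictl.yaml written above sets the default runtime endpoint; the explicit form is equivalent:

	# same runtime endpoint crictl otherwise reads from /etc/crictl.yaml
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version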
	I1027 20:02:10.267045  466537 ssh_runner.go:195] Run: crio --version
	I1027 20:02:10.298361  466537 ssh_runner.go:195] Run: crio --version
	I1027 20:02:10.331588  466537 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 20:02:10.334557  466537 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-073048 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 20:02:10.350671  466537 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1027 20:02:10.354806  466537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
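The hosts rewrite above pins host.minikube.internal to the profile network's gateway so workloads in the node can reach services running on the host. Inside the node this reduces to one line:

	# resolves to the docker bridge gateway for this profile
	grep 'host.minikube.internal' /etc/hosts
	# 192.168.85.1	host.minikube.internal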
	I1027 20:02:10.364147  466537 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-073048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-073048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 20:02:10.364257  466537 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:02:10.364323  466537 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 20:02:10.403274  466537 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 20:02:10.403306  466537 crio.go:433] Images already preloaded, skipping extraction
	I1027 20:02:10.403362  466537 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 20:02:10.433421  466537 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 20:02:10.433445  466537 cache_images.go:85] Images are preloaded, skipping loading
	I1027 20:02:10.433453  466537 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1027 20:02:10.433544  466537 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-073048 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-073048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
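The unit text above is rendered as a systemd drop-in (the 10-kubeadm.conf scp'd a few lines below) whose empty ExecStart= line clears the stock command before setting the minikube one. To see the merged unit the node actually runs:

	# prints kubelet.service plus every drop-in, in application order
	systemctl cat kubelet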
	I1027 20:02:10.433637  466537 ssh_runner.go:195] Run: crio config
	I1027 20:02:10.490910  466537 cni.go:84] Creating CNI manager for ""
	I1027 20:02:10.490933  466537 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 20:02:10.490946  466537 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 20:02:10.490970  466537 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-073048 NodeName:default-k8s-diff-port-073048 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 20:02:10.491135  466537 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-073048"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 20:02:10.491207  466537 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 20:02:10.499267  466537 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 20:02:10.499392  466537 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 20:02:10.507481  466537 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1027 20:02:10.520725  466537 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 20:02:10.535521  466537 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
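That scp lands the generated kubeadm config at /var/tmp/minikube/kubeadm.yaml.new before kubeadm consumes it. Recent kubeadm releases can lint such a file up front; whether the subcommand is available depends on the kubeadm version, so treat this as a hedged sketch:

	# static validation of the generated InitConfiguration/ClusterConfiguration
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new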
	I1027 20:02:10.549926  466537 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1027 20:02:10.553386  466537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 20:02:10.562711  466537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:02:10.673102  466537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 20:02:10.689608  466537 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048 for IP: 192.168.85.2
	I1027 20:02:10.689680  466537 certs.go:195] generating shared ca certs ...
	I1027 20:02:10.689710  466537 certs.go:227] acquiring lock for ca certs: {Name:mk172548fc54811b51040cc0201d01eb4b3dd19b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:10.689894  466537 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key
	I1027 20:02:10.689968  466537 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key
	I1027 20:02:10.690002  466537 certs.go:257] generating profile certs ...
	I1027 20:02:10.690078  466537 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/client.key
	I1027 20:02:10.690117  466537 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/client.crt with IP's: []
	I1027 20:02:11.037755  466537 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/client.crt ...
	I1027 20:02:11.037788  466537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/client.crt: {Name:mk3dec30b7bddf618c0aaebf4bc94cceefd537a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:11.038025  466537 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/client.key ...
	I1027 20:02:11.038043  466537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/client.key: {Name:mkc9b9080434b4424244398f3ae4654bdc4244e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:11.038140  466537 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/apiserver.key.09593244
	I1027 20:02:11.038158  466537 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/apiserver.crt.09593244 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1027 20:02:11.407036  466537 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/apiserver.crt.09593244 ...
	I1027 20:02:11.407069  466537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/apiserver.crt.09593244: {Name:mkde7f8dae578d835a9db285b2d1f3af7707bef8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:11.407261  466537 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/apiserver.key.09593244 ...
	I1027 20:02:11.407276  466537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/apiserver.key.09593244: {Name:mk80a1cda5ee26a996ed938d2b709b9c54cecda5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:11.407363  466537 certs.go:382] copying /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/apiserver.crt.09593244 -> /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/apiserver.crt
	I1027 20:02:11.407443  466537 certs.go:386] copying /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/apiserver.key.09593244 -> /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/apiserver.key
	I1027 20:02:11.407503  466537 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/proxy-client.key
	I1027 20:02:11.407520  466537 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/proxy-client.crt with IP's: []
	I1027 20:02:11.637374  466537 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/proxy-client.crt ...
	I1027 20:02:11.637404  466537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/proxy-client.crt: {Name:mke2088307e623c0e909eee79396116c9ce51be5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:11.637591  466537 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/proxy-client.key ...
	I1027 20:02:11.637607  466537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/proxy-client.key: {Name:mk3024b96351873255027672b0fc172270e8409f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:11.637805  466537 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880.pem (1338 bytes)
	W1027 20:02:11.637845  466537 certs.go:480] ignoring /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880_empty.pem, impossibly tiny 0 bytes
	I1027 20:02:11.637860  466537 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 20:02:11.637889  466537 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem (1078 bytes)
	I1027 20:02:11.637916  466537 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem (1123 bytes)
	I1027 20:02:11.637942  466537 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem (1675 bytes)
	I1027 20:02:11.637988  466537 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 20:02:11.638552  466537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 20:02:11.658336  466537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 20:02:11.676394  466537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 20:02:11.697927  466537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 20:02:11.717890  466537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1027 20:02:11.735620  466537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 20:02:11.754553  466537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 20:02:11.772095  466537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 20:02:11.791754  466537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /usr/share/ca-certificates/2678802.pem (1708 bytes)
	I1027 20:02:11.809450  466537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 20:02:11.827479  466537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880.pem --> /usr/share/ca-certificates/267880.pem (1338 bytes)
	I1027 20:02:11.844668  466537 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 20:02:11.856866  466537 ssh_runner.go:195] Run: openssl version
	I1027 20:02:11.862951  466537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 20:02:11.871148  466537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:02:11.874691  466537 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:02:11.874777  466537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:02:11.915862  466537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 20:02:11.924270  466537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/267880.pem && ln -fs /usr/share/ca-certificates/267880.pem /etc/ssl/certs/267880.pem"
	I1027 20:02:11.932658  466537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/267880.pem
	I1027 20:02:11.936668  466537 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:04 /usr/share/ca-certificates/267880.pem
	I1027 20:02:11.936755  466537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/267880.pem
	I1027 20:02:11.978278  466537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/267880.pem /etc/ssl/certs/51391683.0"
	I1027 20:02:11.987429  466537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2678802.pem && ln -fs /usr/share/ca-certificates/2678802.pem /etc/ssl/certs/2678802.pem"
	I1027 20:02:11.996236  466537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2678802.pem
	I1027 20:02:12.008101  466537 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:04 /usr/share/ca-certificates/2678802.pem
	I1027 20:02:12.008190  466537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2678802.pem
	I1027 20:02:12.050458  466537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2678802.pem /etc/ssl/certs/3ec20f2e.0"
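
The three test/ln/openssl sequences above follow OpenSSL's standard CA-directory convention: a certificate under /etc/ssl/certs is trusted once a symlink named <subject-hash>.0 points at it, where the hash comes from `openssl x509 -hash`. A minimal standalone sketch of the same pattern (the certificate path below is hypothetical):

    # Trust a CA system-wide via OpenSSL's hash-named symlink convention
    # (the same sequence minikube runs over SSH above; cert path hypothetical).
    CERT=/usr/share/ca-certificates/myCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941
    sudo test -s "$CERT" && sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
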
	I1027 20:02:12.060145  466537 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 20:02:12.063954  466537 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 20:02:12.064025  466537 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-073048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-073048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:02:12.064102  466537 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 20:02:12.064171  466537 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 20:02:12.096232  466537 cri.go:89] found id: ""
	I1027 20:02:12.096381  466537 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 20:02:12.108262  466537 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 20:02:12.117899  466537 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1027 20:02:12.117966  466537 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 20:02:12.129180  466537 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 20:02:12.129250  466537 kubeadm.go:157] found existing configuration files:
	
	I1027 20:02:12.129332  466537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1027 20:02:12.138753  466537 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 20:02:12.138864  466537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 20:02:12.146299  466537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1027 20:02:12.154399  466537 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 20:02:12.154491  466537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 20:02:12.161964  466537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1027 20:02:12.169675  466537 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 20:02:12.169745  466537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 20:02:12.177185  466537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1027 20:02:12.184880  466537 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 20:02:12.185002  466537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
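
The grep-then-rm sequence above is minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it points at the expected API server endpoint (the exit status 2 here simply means the files do not exist yet on a first start). A condensed sketch of the same logic, with the endpoint copied from the log:

    # Remove kubeconfigs that do not reference the expected endpoint
    # (mirrors the per-file grep/rm sequence above).
    ENDPOINT="https://control-plane.minikube.internal:8444"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
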
	I1027 20:02:12.192280  466537 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 20:02:12.237945  466537 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1027 20:02:12.238159  466537 kubeadm.go:318] [preflight] Running pre-flight checks
	I1027 20:02:12.270120  466537 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1027 20:02:12.270222  466537 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1027 20:02:12.270264  466537 kubeadm.go:318] OS: Linux
	I1027 20:02:12.270331  466537 kubeadm.go:318] CGROUPS_CPU: enabled
	I1027 20:02:12.270399  466537 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1027 20:02:12.270469  466537 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1027 20:02:12.270536  466537 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1027 20:02:12.270602  466537 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1027 20:02:12.270687  466537 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1027 20:02:12.270752  466537 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1027 20:02:12.270819  466537 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1027 20:02:12.270885  466537 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1027 20:02:12.352619  466537 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 20:02:12.352737  466537 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 20:02:12.352844  466537 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 20:02:12.361430  466537 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1027 20:02:10.709501  462995 pod_ready.go:104] pod "coredns-66bc5c9577-ch8jv" is not "Ready", error: <nil>
	W1027 20:02:13.199005  462995 pod_ready.go:104] pod "coredns-66bc5c9577-ch8jv" is not "Ready", error: <nil>
	I1027 20:02:12.367893  466537 out.go:252]   - Generating certificates and keys ...
	I1027 20:02:12.368001  466537 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1027 20:02:12.368076  466537 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1027 20:02:12.652566  466537 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 20:02:12.910174  466537 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1027 20:02:13.616264  466537 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1027 20:02:13.820838  466537 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1027 20:02:14.605092  466537 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1027 20:02:14.605754  466537 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-073048 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1027 20:02:15.277284  466537 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1027 20:02:15.277720  466537 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-073048 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	W1027 20:02:15.204785  462995 pod_ready.go:104] pod "coredns-66bc5c9577-ch8jv" is not "Ready", error: <nil>
	W1027 20:02:17.699172  462995 pod_ready.go:104] pod "coredns-66bc5c9577-ch8jv" is not "Ready", error: <nil>
	I1027 20:02:16.086237  466537 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 20:02:16.966896  466537 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 20:02:17.464452  466537 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1027 20:02:17.464744  466537 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 20:02:17.968981  466537 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 20:02:18.440834  466537 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 20:02:18.708868  466537 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 20:02:18.980130  466537 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 20:02:19.906232  466537 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 20:02:19.906884  466537 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 20:02:19.909805  466537 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 20:02:19.913120  466537 out.go:252]   - Booting up control plane ...
	I1027 20:02:19.913221  466537 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 20:02:19.913317  466537 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 20:02:19.914262  466537 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 20:02:19.932958  466537 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 20:02:19.933081  466537 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 20:02:19.941921  466537 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 20:02:19.942251  466537 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 20:02:19.942300  466537 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1027 20:02:20.103417  466537 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 20:02:20.103542  466537 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1027 20:02:19.699316  462995 pod_ready.go:104] pod "coredns-66bc5c9577-ch8jv" is not "Ready", error: <nil>
	I1027 20:02:21.698481  462995 pod_ready.go:94] pod "coredns-66bc5c9577-ch8jv" is "Ready"
	I1027 20:02:21.698555  462995 pod_ready.go:86] duration metric: took 37.006187785s for pod "coredns-66bc5c9577-ch8jv" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:02:21.702390  462995 pod_ready.go:83] waiting for pod "etcd-embed-certs-629838" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:02:21.708240  462995 pod_ready.go:94] pod "etcd-embed-certs-629838" is "Ready"
	I1027 20:02:21.708313  462995 pod_ready.go:86] duration metric: took 5.847869ms for pod "etcd-embed-certs-629838" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:02:21.712595  462995 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-629838" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:02:21.716381  462995 pod_ready.go:94] pod "kube-apiserver-embed-certs-629838" is "Ready"
	I1027 20:02:21.716401  462995 pod_ready.go:86] duration metric: took 3.786003ms for pod "kube-apiserver-embed-certs-629838" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:02:21.718384  462995 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-629838" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:02:21.896217  462995 pod_ready.go:94] pod "kube-controller-manager-embed-certs-629838" is "Ready"
	I1027 20:02:21.896286  462995 pod_ready.go:86] duration metric: took 177.884298ms for pod "kube-controller-manager-embed-certs-629838" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:02:22.100995  462995 pod_ready.go:83] waiting for pod "kube-proxy-bwql6" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:02:22.497174  462995 pod_ready.go:94] pod "kube-proxy-bwql6" is "Ready"
	I1027 20:02:22.497250  462995 pod_ready.go:86] duration metric: took 396.172576ms for pod "kube-proxy-bwql6" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:02:22.696148  462995 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-629838" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:02:23.100997  462995 pod_ready.go:94] pod "kube-scheduler-embed-certs-629838" is "Ready"
	I1027 20:02:23.101072  462995 pod_ready.go:86] duration metric: took 404.852145ms for pod "kube-scheduler-embed-certs-629838" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:02:23.101101  462995 pod_ready.go:40] duration metric: took 38.413556108s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 20:02:23.215007  462995 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 20:02:23.218010  462995 out.go:179] * Done! kubectl is now configured to use "embed-certs-629838" cluster and "default" namespace by default
	I1027 20:02:21.109506  466537 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.002230236s
	I1027 20:02:21.109646  466537 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 20:02:21.109746  466537 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1027 20:02:21.109849  466537 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 20:02:21.109956  466537 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 20:02:25.714464  466537 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.605095298s
	I1027 20:02:25.983914  466537 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.874849435s
	I1027 20:02:27.611230  466537 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.50209575s
	I1027 20:02:27.631954  466537 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 20:02:27.648446  466537 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 20:02:27.663147  466537 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 20:02:27.663384  466537 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-073048 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 20:02:27.681225  466537 kubeadm.go:318] [bootstrap-token] Using token: xsum8b.ramxdmytyu4idcni
	I1027 20:02:27.684323  466537 out.go:252]   - Configuring RBAC rules ...
	I1027 20:02:27.684453  466537 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 20:02:27.689088  466537 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 20:02:27.698096  466537 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 20:02:27.702404  466537 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 20:02:27.710374  466537 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 20:02:27.715128  466537 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 20:02:28.025333  466537 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 20:02:28.450941  466537 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1027 20:02:29.018078  466537 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1027 20:02:29.019750  466537 kubeadm.go:318] 
	I1027 20:02:29.019827  466537 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1027 20:02:29.019841  466537 kubeadm.go:318] 
	I1027 20:02:29.019923  466537 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1027 20:02:29.019931  466537 kubeadm.go:318] 
	I1027 20:02:29.019958  466537 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1027 20:02:29.020024  466537 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 20:02:29.020085  466537 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 20:02:29.020092  466537 kubeadm.go:318] 
	I1027 20:02:29.020148  466537 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1027 20:02:29.020156  466537 kubeadm.go:318] 
	I1027 20:02:29.020206  466537 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 20:02:29.020219  466537 kubeadm.go:318] 
	I1027 20:02:29.020274  466537 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1027 20:02:29.020355  466537 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 20:02:29.020430  466537 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 20:02:29.020439  466537 kubeadm.go:318] 
	I1027 20:02:29.020526  466537 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 20:02:29.020611  466537 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1027 20:02:29.020619  466537 kubeadm.go:318] 
	I1027 20:02:29.020706  466537 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token xsum8b.ramxdmytyu4idcni \
	I1027 20:02:29.020816  466537 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9473f077605f6866eb08c8a712af7c56a0deb8dc2a907ec8cbb4b8ae396e8f1f \
	I1027 20:02:29.020840  466537 kubeadm.go:318] 	--control-plane 
	I1027 20:02:29.020851  466537 kubeadm.go:318] 
	I1027 20:02:29.020941  466537 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1027 20:02:29.020949  466537 kubeadm.go:318] 
	I1027 20:02:29.021034  466537 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token xsum8b.ramxdmytyu4idcni \
	I1027 20:02:29.021145  466537 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9473f077605f6866eb08c8a712af7c56a0deb8dc2a907ec8cbb4b8ae396e8f1f 
	I1027 20:02:29.025629  466537 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1027 20:02:29.025865  466537 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1027 20:02:29.025979  466537 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
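
The --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed on the control plane with the standard kubeadm recipe; a sketch assuming the RSA CA under the certificateDir used in the [certs] phase above:

    # Recompute the discovery-token CA cert hash from the cluster CA
    # (certificateDir /var/lib/minikube/certs taken from the log above).
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
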
	I1027 20:02:29.026017  466537 cni.go:84] Creating CNI manager for ""
	I1027 20:02:29.026029  466537 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 20:02:29.029252  466537 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 20:02:29.032290  466537 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 20:02:29.036632  466537 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 20:02:29.036655  466537 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 20:02:29.050047  466537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
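
Once this manifest is applied, kindnet writes its conflist into /etc/cni/net.d, where CRI-O's CNI monitor picks it up (the CREATE/WRITE/RENAME events are visible in the CRI-O section below). A quick manual check, assuming shell access to the node:

    # Verify the CNI config that CRI-O is watching for
    ls -l /etc/cni/net.d/
    cat /etc/cni/net.d/10-kindnet.conflist
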
	I1027 20:02:29.361429  466537 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 20:02:29.361563  466537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:02:29.361663  466537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-073048 minikube.k8s.io/updated_at=2025_10_27T20_02_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f minikube.k8s.io/name=default-k8s-diff-port-073048 minikube.k8s.io/primary=true
	I1027 20:02:29.515886  466537 ops.go:34] apiserver oom_adj: -16
	I1027 20:02:29.515905  466537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:02:30.019982  466537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:02:30.516367  466537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:02:31.016003  466537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:02:31.516711  466537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:02:32.016020  466537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:02:32.515937  466537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:02:33.016021  466537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:02:33.516205  466537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:02:33.620099  466537 kubeadm.go:1113] duration metric: took 4.258578635s to wait for elevateKubeSystemPrivileges
	I1027 20:02:33.620138  466537 kubeadm.go:402] duration metric: took 21.556116366s to StartCluster
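
The repeated `kubectl get sa default` runs above are the wait behind elevateKubeSystemPrivileges: the clusterrolebinding only matters once the controller-manager has created the "default" ServiceAccount. Roughly equivalent as a shell loop, with the binary and kubeconfig paths copied from the log:

    # Poll until the "default" ServiceAccount exists
    # (mirrors the ~0.5s retry cadence visible in the timestamps above).
    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
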
	I1027 20:02:33.620156  466537 settings.go:142] acquiring lock: {Name:mk49b9e2e9b7901c6b51e89b8b1e8ee7f9e88107 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:33.620229  466537 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 20:02:33.621800  466537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/kubeconfig: {Name:mk0a03a594f8d9f9199b1c7cff7c550cec8414c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:33.622048  466537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 20:02:33.622060  466537 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 20:02:33.622318  466537 config.go:182] Loaded profile config "default-k8s-diff-port-073048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:02:33.622365  466537 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 20:02:33.622446  466537 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-073048"
	I1027 20:02:33.622469  466537 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-073048"
	I1027 20:02:33.622498  466537 host.go:66] Checking if "default-k8s-diff-port-073048" exists ...
	I1027 20:02:33.622978  466537 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-073048 --format={{.State.Status}}
	I1027 20:02:33.623393  466537 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-073048"
	I1027 20:02:33.623412  466537 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-073048"
	I1027 20:02:33.623693  466537 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-073048 --format={{.State.Status}}
	I1027 20:02:33.627092  466537 out.go:179] * Verifying Kubernetes components...
	I1027 20:02:33.637147  466537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:02:33.673940  466537 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 20:02:33.675252  466537 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-073048"
	I1027 20:02:33.675295  466537 host.go:66] Checking if "default-k8s-diff-port-073048" exists ...
	I1027 20:02:33.675764  466537 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-073048 --format={{.State.Status}}
	I1027 20:02:33.676861  466537 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 20:02:33.676886  466537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 20:02:33.676937  466537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:02:33.740984  466537 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 20:02:33.741014  466537 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 20:02:33.741088  466537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:02:33.757260  466537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa Username:docker}
	I1027 20:02:33.776639  466537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa Username:docker}
	I1027 20:02:33.995598  466537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 20:02:33.995724  466537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 20:02:34.050633  466537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 20:02:34.111720  466537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 20:02:34.564110  466537 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-073048" to be "Ready" ...
	I1027 20:02:34.564487  466537 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1027 20:02:34.995088  466537 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1027 20:02:35.007861  466537 addons.go:514] duration metric: took 1.385460103s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1027 20:02:35.069913  466537 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-073048" context rescaled to 1 replicas
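
The sed pipeline run a few lines earlier rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.85.1 here). The injected Corefile stanza, reconstructed from that sed expression rather than dumped from the cluster, looks like:

        hosts {
           192.168.85.1 host.minikube.internal
           fallthrough
        }

    # Inspect the result on a live cluster:
    kubectl -n kube-system get configmap coredns -o yaml
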
	
	
	==> CRI-O <==
	Oct 27 20:02:15 embed-certs-629838 crio[648]: time="2025-10-27T20:02:15.017995605Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:02:15 embed-certs-629838 crio[648]: time="2025-10-27T20:02:15.038041616Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:02:15 embed-certs-629838 crio[648]: time="2025-10-27T20:02:15.038830632Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:02:15 embed-certs-629838 crio[648]: time="2025-10-27T20:02:15.067868202Z" level=info msg="Created container 05a28f65e7306f32e6fe44d10f1eb7513a2f00be81d35bf861d871965164280c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ddw8x/dashboard-metrics-scraper" id=33ad5ee1-89e2-43cd-ab58-33d0882491ca name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 20:02:15 embed-certs-629838 crio[648]: time="2025-10-27T20:02:15.072471741Z" level=info msg="Starting container: 05a28f65e7306f32e6fe44d10f1eb7513a2f00be81d35bf861d871965164280c" id=e430a4ca-f8b1-4971-8063-565aab7f51d2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 20:02:15 embed-certs-629838 crio[648]: time="2025-10-27T20:02:15.080133171Z" level=info msg="Started container" PID=1638 containerID=05a28f65e7306f32e6fe44d10f1eb7513a2f00be81d35bf861d871965164280c description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ddw8x/dashboard-metrics-scraper id=e430a4ca-f8b1-4971-8063-565aab7f51d2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=101807dc653a62bf806f9e8df5e5c4ce035d4ffa3f3b030a4bee4cd940cd1e3b
	Oct 27 20:02:15 embed-certs-629838 conmon[1636]: conmon 05a28f65e7306f32e6fe <ninfo>: container 1638 exited with status 1
	Oct 27 20:02:15 embed-certs-629838 crio[648]: time="2025-10-27T20:02:15.252110655Z" level=info msg="Removing container: 8c895b40bc07fab1aa6283121a31d2bbc6b09d35a5b8c241b7d9592dba32c1fe" id=c5913882-fe40-4aee-a6fc-b4ec1ff4cf8a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 20:02:15 embed-certs-629838 crio[648]: time="2025-10-27T20:02:15.265041198Z" level=info msg="Error loading conmon cgroup of container 8c895b40bc07fab1aa6283121a31d2bbc6b09d35a5b8c241b7d9592dba32c1fe: cgroup deleted" id=c5913882-fe40-4aee-a6fc-b4ec1ff4cf8a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 20:02:15 embed-certs-629838 crio[648]: time="2025-10-27T20:02:15.272668078Z" level=info msg="Removed container 8c895b40bc07fab1aa6283121a31d2bbc6b09d35a5b8c241b7d9592dba32c1fe: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ddw8x/dashboard-metrics-scraper" id=c5913882-fe40-4aee-a6fc-b4ec1ff4cf8a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 20:02:23 embed-certs-629838 crio[648]: time="2025-10-27T20:02:23.824429493Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 20:02:23 embed-certs-629838 crio[648]: time="2025-10-27T20:02:23.833500098Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 20:02:23 embed-certs-629838 crio[648]: time="2025-10-27T20:02:23.833691026Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 20:02:23 embed-certs-629838 crio[648]: time="2025-10-27T20:02:23.833836655Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 20:02:23 embed-certs-629838 crio[648]: time="2025-10-27T20:02:23.846282723Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 20:02:23 embed-certs-629838 crio[648]: time="2025-10-27T20:02:23.846515734Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 20:02:23 embed-certs-629838 crio[648]: time="2025-10-27T20:02:23.846631964Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 20:02:23 embed-certs-629838 crio[648]: time="2025-10-27T20:02:23.855657697Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 20:02:23 embed-certs-629838 crio[648]: time="2025-10-27T20:02:23.855862729Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 20:02:23 embed-certs-629838 crio[648]: time="2025-10-27T20:02:23.855957307Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 20:02:23 embed-certs-629838 crio[648]: time="2025-10-27T20:02:23.862164231Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 20:02:23 embed-certs-629838 crio[648]: time="2025-10-27T20:02:23.86237316Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 20:02:23 embed-certs-629838 crio[648]: time="2025-10-27T20:02:23.862477453Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 20:02:23 embed-certs-629838 crio[648]: time="2025-10-27T20:02:23.871801384Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 20:02:23 embed-certs-629838 crio[648]: time="2025-10-27T20:02:23.871973056Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	05a28f65e7306       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago       Exited              dashboard-metrics-scraper   2                   101807dc653a6       dashboard-metrics-scraper-6ffb444bf9-ddw8x   kubernetes-dashboard
	e2942edb63853       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   7b41b223ce06c       storage-provisioner                          kube-system
	098bd45d59457       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   39 seconds ago       Running             kubernetes-dashboard        0                   3d6d5b98e1974       kubernetes-dashboard-855c9754f9-zplzg        kubernetes-dashboard
	b64abd248b5af       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   8cbe3951908ef       busybox                                      default
	36598828b97e6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           55 seconds ago       Running             coredns                     1                   6defc14652a78       coredns-66bc5c9577-ch8jv                     kube-system
	82adb2c58510b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   7b41b223ce06c       storage-provisioner                          kube-system
	fff55bbbe9a89       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           55 seconds ago       Running             kube-proxy                  1                   0a3faf612e1b2       kube-proxy-bwql6                             kube-system
	1ec2027ec7db3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   5b8d10ca9acfa       kindnet-cfqpk                                kube-system
	62e147a7d6f68       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   ba99f7ad335af       kube-scheduler-embed-certs-629838            kube-system
	d4ab5323a8b08       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   f25cb03f02ff1       kube-controller-manager-embed-certs-629838   kube-system
	b939b6634b4d0       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   bd06cd33f97af       etcd-embed-certs-629838                      kube-system
	0ad8adca28e83       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   6d9a9e16144a8       kube-apiserver-embed-certs-629838            kube-system
	
	
	==> coredns [36598828b97e660c2e2764dd87dc4bc9566a206293908f2e298358a5c6ba4a21] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40424 - 10862 "HINFO IN 1449722674201347895.966792375961270052. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021967729s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-629838
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-629838
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=embed-certs-629838
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T20_00_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 20:00:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-629838
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 20:02:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 20:02:13 +0000   Mon, 27 Oct 2025 20:00:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 20:02:13 +0000   Mon, 27 Oct 2025 20:00:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 20:02:13 +0000   Mon, 27 Oct 2025 20:00:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 20:02:13 +0000   Mon, 27 Oct 2025 20:01:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-629838
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                6cfa2846-7c31-4e89-9dcc-f2fbb567f43d
	  Boot ID:                    23ea05b4-8203-4fb9-a84a-5deb4b091cbb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-ch8jv                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m17s
	  kube-system                 etcd-embed-certs-629838                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m22s
	  kube-system                 kindnet-cfqpk                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m18s
	  kube-system                 kube-apiserver-embed-certs-629838             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-controller-manager-embed-certs-629838    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-proxy-bwql6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-embed-certs-629838             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-ddw8x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-zplzg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m15s                  kube-proxy       
	  Normal   Starting                 54s                    kube-proxy       
	  Normal   Starting                 2m30s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m30s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m30s (x8 over 2m30s)  kubelet          Node embed-certs-629838 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m30s (x8 over 2m30s)  kubelet          Node embed-certs-629838 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m30s (x8 over 2m30s)  kubelet          Node embed-certs-629838 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m22s                  kubelet          Node embed-certs-629838 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m22s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m22s                  kubelet          Node embed-certs-629838 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m22s                  kubelet          Node embed-certs-629838 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m22s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m18s                  node-controller  Node embed-certs-629838 event: Registered Node embed-certs-629838 in Controller
	  Normal   NodeReady                96s                    kubelet          Node embed-certs-629838 status is now: NodeReady
	  Normal   Starting                 63s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 63s)      kubelet          Node embed-certs-629838 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 63s)      kubelet          Node embed-certs-629838 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 63s)      kubelet          Node embed-certs-629838 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           54s                    node-controller  Node embed-certs-629838 event: Registered Node embed-certs-629838 in Controller
	
	
	==> dmesg <==
	[Oct27 19:38] overlayfs: idmapped layers are currently not supported
	[Oct27 19:39] overlayfs: idmapped layers are currently not supported
	[Oct27 19:40] overlayfs: idmapped layers are currently not supported
	[  +7.015891] overlayfs: idmapped layers are currently not supported
	[Oct27 19:41] overlayfs: idmapped layers are currently not supported
	[ +28.455216] overlayfs: idmapped layers are currently not supported
	[Oct27 19:42] overlayfs: idmapped layers are currently not supported
	[ +26.490174] overlayfs: idmapped layers are currently not supported
	[Oct27 19:44] overlayfs: idmapped layers are currently not supported
	[Oct27 19:45] overlayfs: idmapped layers are currently not supported
	[Oct27 19:47] overlayfs: idmapped layers are currently not supported
	[Oct27 19:49] overlayfs: idmapped layers are currently not supported
	[ +31.410335] overlayfs: idmapped layers are currently not supported
	[Oct27 19:51] overlayfs: idmapped layers are currently not supported
	[Oct27 19:53] overlayfs: idmapped layers are currently not supported
	[Oct27 19:54] overlayfs: idmapped layers are currently not supported
	[Oct27 19:55] overlayfs: idmapped layers are currently not supported
	[Oct27 19:56] overlayfs: idmapped layers are currently not supported
	[Oct27 19:57] overlayfs: idmapped layers are currently not supported
	[Oct27 19:58] overlayfs: idmapped layers are currently not supported
	[Oct27 19:59] overlayfs: idmapped layers are currently not supported
	[Oct27 20:00] overlayfs: idmapped layers are currently not supported
	[ +41.321877] overlayfs: idmapped layers are currently not supported
	[Oct27 20:01] overlayfs: idmapped layers are currently not supported
	[Oct27 20:02] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b939b6634b4d016b4989e6e47aa2060602b0e9582bcbe6e92ed906e4e1c2d5b5] <==
	{"level":"warn","ts":"2025-10-27T20:01:40.301779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.340613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.379137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.415449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.430536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.459333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.492681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.517181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.563565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.620332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.639505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.653997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.669508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.693957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.732973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.767406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.788467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.835862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.837560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.861031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.880382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.914269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.939692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.975945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:41.067820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50798","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:02:39 up  2:45,  0 user,  load average: 2.73, 2.96, 2.63
	Linux embed-certs-629838 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1ec2027ec7db30a16a7dcd76b4cab74cf0f0fee27a8a8159c50979c610bb6883] <==
	I1027 20:01:43.645211       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 20:01:43.645831       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1027 20:01:43.646078       1 main.go:148] setting mtu 1500 for CNI 
	I1027 20:01:43.646120       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 20:01:43.646182       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T20:01:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 20:01:43.824551       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 20:01:43.824620       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 20:01:43.824654       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 20:01:43.825337       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 20:02:13.825541       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1027 20:02:13.825663       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1027 20:02:13.825738       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1027 20:02:13.825813       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1027 20:02:15.425152       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 20:02:15.425212       1 metrics.go:72] Registering metrics
	I1027 20:02:15.425292       1 controller.go:711] "Syncing nftables rules"
	I1027 20:02:23.824159       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 20:02:23.824209       1 main.go:301] handling current node
	I1027 20:02:33.831095       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 20:02:33.831202       1 main.go:301] handling current node
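	
	Note: the "Failed to watch ... dial tcp 10.96.0.1:443: i/o timeout" errors show the in-cluster apiserver Service VIP was unreachable for roughly the first 30s after restart; the retries then succeeded and the caches synced at 20:02:15. A minimal sketch for checking the VIP from the node (10.96.0.1 is the default kubernetes Service ClusterIP, taken from the log above):
	
	  curl -k -m 3 https://10.96.0.1/version   # any HTTP response at all means the VIP routes to the apiserver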
	
	
	==> kube-apiserver [0ad8adca28e83fc64435c7179ec7d5ad6ddbf98b75047dd679977005035865ac] <==
	I1027 20:01:42.545981       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 20:01:42.590388       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1027 20:01:42.590529       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 20:01:42.591708       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1027 20:01:42.597902       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1027 20:01:42.597973       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 20:01:42.601140       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1027 20:01:42.601400       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1027 20:01:42.601417       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1027 20:01:42.602030       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 20:01:42.602064       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1027 20:01:42.614579       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1027 20:01:42.616688       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 20:01:42.636413       1 cache.go:39] Caches are synced for autoregister controller
	I1027 20:01:43.026630       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 20:01:43.105832       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 20:01:43.382051       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 20:01:43.538424       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 20:01:43.628477       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 20:01:43.700433       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 20:01:44.056431       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.239.19"}
	I1027 20:01:44.122398       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.145.67"}
	I1027 20:01:46.301883       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 20:01:46.352954       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 20:01:46.519191       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [d4ab5323a8b08831eb177e903d21bbe79ed018f63e07293ef7e383fc941ad31f] <==
	I1027 20:01:45.909597       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1027 20:01:45.909658       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 20:01:45.912906       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1027 20:01:45.912966       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1027 20:01:45.914098       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 20:01:45.914114       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1027 20:01:45.914124       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1027 20:01:45.915219       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 20:01:45.917465       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 20:01:45.918702       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 20:01:45.921913       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 20:01:45.923552       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1027 20:01:45.925019       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 20:01:45.928270       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 20:01:45.930479       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 20:01:45.932788       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 20:01:45.940700       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 20:01:45.945846       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 20:01:45.946468       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1027 20:01:45.946663       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 20:01:45.946834       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 20:01:45.948992       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 20:01:45.949318       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 20:01:45.953902       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 20:01:45.964708       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	
	
	==> kube-proxy [fff55bbbe9a89678e38123d75f96796776f16fde5fd525adf749545f93e256a0] <==
	I1027 20:01:44.065723       1 server_linux.go:53] "Using iptables proxy"
	I1027 20:01:44.285895       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 20:01:44.392470       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 20:01:44.394111       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1027 20:01:44.394262       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 20:01:44.425543       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 20:01:44.425603       1 server_linux.go:132] "Using iptables Proxier"
	I1027 20:01:44.432720       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 20:01:44.433212       1 server.go:527] "Version info" version="v1.34.1"
	I1027 20:01:44.433447       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 20:01:44.434682       1 config.go:200] "Starting service config controller"
	I1027 20:01:44.434906       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 20:01:44.434976       1 config.go:106] "Starting endpoint slice config controller"
	I1027 20:01:44.435093       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 20:01:44.435133       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 20:01:44.435161       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 20:01:44.435822       1 config.go:309] "Starting node config controller"
	I1027 20:01:44.435891       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 20:01:44.435923       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 20:01:44.535756       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 20:01:44.535861       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 20:01:44.535886       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
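	
	Note: the nodePortAddresses warning above is kube-proxy's own suggestion rather than a failure; it just means NodePort connections are accepted on every local IP. In a kubeadm-provisioned cluster such as this one the setting lives in the kube-proxy ConfigMap, so a hedged way to inspect it (assuming the standard ConfigMap name in kube-system):
	
	  kubectl -n kube-system get configmap kube-proxy -o yaml | grep -A 2 nodePortAddresses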
	
	
	==> kube-scheduler [62e147a7d6f68f8e6cb774b163bbf47eeada86a4f24470d5ec3b80cc23844557] <==
	I1027 20:01:40.602866       1 serving.go:386] Generated self-signed cert in-memory
	I1027 20:01:44.092976       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 20:01:44.093002       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 20:01:44.105970       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1027 20:01:44.106159       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1027 20:01:44.106243       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:01:44.106275       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:01:44.106341       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 20:01:44.106370       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 20:01:44.107932       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 20:01:44.108010       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 20:01:44.206862       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 20:01:44.207025       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1027 20:01:44.207174       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 20:01:46 embed-certs-629838 kubelet[774]: I1027 20:01:46.536422     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f64c87eb-43dd-4b88-b7e7-32467fb2e83d-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-ddw8x\" (UID: \"f64c87eb-43dd-4b88-b7e7-32467fb2e83d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ddw8x"
	Oct 27 20:01:46 embed-certs-629838 kubelet[774]: W1027 20:01:46.830665     774 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c4f57eb9d97cb78db15678f52105eae7c06964f89c91a987456d1c3d7cf90fa0/crio-101807dc653a62bf806f9e8df5e5c4ce035d4ffa3f3b030a4bee4cd940cd1e3b WatchSource:0}: Error finding container 101807dc653a62bf806f9e8df5e5c4ce035d4ffa3f3b030a4bee4cd940cd1e3b: Status 404 returned error can't find the container with id 101807dc653a62bf806f9e8df5e5c4ce035d4ffa3f3b030a4bee4cd940cd1e3b
	Oct 27 20:01:46 embed-certs-629838 kubelet[774]: W1027 20:01:46.847229     774 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c4f57eb9d97cb78db15678f52105eae7c06964f89c91a987456d1c3d7cf90fa0/crio-3d6d5b98e197469779a897c8b16efc5624c2c2f39e8b6d20bce4ae5c95bfae38 WatchSource:0}: Error finding container 3d6d5b98e197469779a897c8b16efc5624c2c2f39e8b6d20bce4ae5c95bfae38: Status 404 returned error can't find the container with id 3d6d5b98e197469779a897c8b16efc5624c2c2f39e8b6d20bce4ae5c95bfae38
	Oct 27 20:01:51 embed-certs-629838 kubelet[774]: I1027 20:01:51.535351     774 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 27 20:01:53 embed-certs-629838 kubelet[774]: I1027 20:01:53.126673     774 scope.go:117] "RemoveContainer" containerID="51093c6b50d7ba6eaef141798fa97b57e350b938feeb3059bc670a50e0635c33"
	Oct 27 20:01:54 embed-certs-629838 kubelet[774]: I1027 20:01:54.175907     774 scope.go:117] "RemoveContainer" containerID="8c895b40bc07fab1aa6283121a31d2bbc6b09d35a5b8c241b7d9592dba32c1fe"
	Oct 27 20:01:54 embed-certs-629838 kubelet[774]: E1027 20:01:54.176105     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ddw8x_kubernetes-dashboard(f64c87eb-43dd-4b88-b7e7-32467fb2e83d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ddw8x" podUID="f64c87eb-43dd-4b88-b7e7-32467fb2e83d"
	Oct 27 20:01:54 embed-certs-629838 kubelet[774]: I1027 20:01:54.179240     774 scope.go:117] "RemoveContainer" containerID="51093c6b50d7ba6eaef141798fa97b57e350b938feeb3059bc670a50e0635c33"
	Oct 27 20:01:55 embed-certs-629838 kubelet[774]: I1027 20:01:55.182893     774 scope.go:117] "RemoveContainer" containerID="8c895b40bc07fab1aa6283121a31d2bbc6b09d35a5b8c241b7d9592dba32c1fe"
	Oct 27 20:01:55 embed-certs-629838 kubelet[774]: E1027 20:01:55.188004     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ddw8x_kubernetes-dashboard(f64c87eb-43dd-4b88-b7e7-32467fb2e83d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ddw8x" podUID="f64c87eb-43dd-4b88-b7e7-32467fb2e83d"
	Oct 27 20:01:59 embed-certs-629838 kubelet[774]: I1027 20:01:59.856497     774 scope.go:117] "RemoveContainer" containerID="8c895b40bc07fab1aa6283121a31d2bbc6b09d35a5b8c241b7d9592dba32c1fe"
	Oct 27 20:01:59 embed-certs-629838 kubelet[774]: E1027 20:01:59.856679     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ddw8x_kubernetes-dashboard(f64c87eb-43dd-4b88-b7e7-32467fb2e83d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ddw8x" podUID="f64c87eb-43dd-4b88-b7e7-32467fb2e83d"
	Oct 27 20:02:14 embed-certs-629838 kubelet[774]: I1027 20:02:14.243680     774 scope.go:117] "RemoveContainer" containerID="82adb2c58510baa0e605b645b22fc5bef69cd39b85ce04adde1bd01d6a2127b3"
	Oct 27 20:02:14 embed-certs-629838 kubelet[774]: I1027 20:02:14.275721     774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zplzg" podStartSLOduration=16.043386189 podStartE2EDuration="28.275693921s" podCreationTimestamp="2025-10-27 20:01:46 +0000 UTC" firstStartedPulling="2025-10-27 20:01:46.851470165 +0000 UTC m=+10.126249646" lastFinishedPulling="2025-10-27 20:01:59.083777896 +0000 UTC m=+22.358557378" observedRunningTime="2025-10-27 20:01:59.224116346 +0000 UTC m=+22.498895877" watchObservedRunningTime="2025-10-27 20:02:14.275693921 +0000 UTC m=+37.550473411"
	Oct 27 20:02:15 embed-certs-629838 kubelet[774]: I1027 20:02:15.002538     774 scope.go:117] "RemoveContainer" containerID="8c895b40bc07fab1aa6283121a31d2bbc6b09d35a5b8c241b7d9592dba32c1fe"
	Oct 27 20:02:15 embed-certs-629838 kubelet[774]: I1027 20:02:15.247988     774 scope.go:117] "RemoveContainer" containerID="8c895b40bc07fab1aa6283121a31d2bbc6b09d35a5b8c241b7d9592dba32c1fe"
	Oct 27 20:02:15 embed-certs-629838 kubelet[774]: I1027 20:02:15.248270     774 scope.go:117] "RemoveContainer" containerID="05a28f65e7306f32e6fe44d10f1eb7513a2f00be81d35bf861d871965164280c"
	Oct 27 20:02:15 embed-certs-629838 kubelet[774]: E1027 20:02:15.248508     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ddw8x_kubernetes-dashboard(f64c87eb-43dd-4b88-b7e7-32467fb2e83d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ddw8x" podUID="f64c87eb-43dd-4b88-b7e7-32467fb2e83d"
	Oct 27 20:02:19 embed-certs-629838 kubelet[774]: I1027 20:02:19.856535     774 scope.go:117] "RemoveContainer" containerID="05a28f65e7306f32e6fe44d10f1eb7513a2f00be81d35bf861d871965164280c"
	Oct 27 20:02:19 embed-certs-629838 kubelet[774]: E1027 20:02:19.857194     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ddw8x_kubernetes-dashboard(f64c87eb-43dd-4b88-b7e7-32467fb2e83d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ddw8x" podUID="f64c87eb-43dd-4b88-b7e7-32467fb2e83d"
	Oct 27 20:02:35 embed-certs-629838 kubelet[774]: I1027 20:02:35.003210     774 scope.go:117] "RemoveContainer" containerID="05a28f65e7306f32e6fe44d10f1eb7513a2f00be81d35bf861d871965164280c"
	Oct 27 20:02:35 embed-certs-629838 kubelet[774]: E1027 20:02:35.004288     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ddw8x_kubernetes-dashboard(f64c87eb-43dd-4b88-b7e7-32467fb2e83d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ddw8x" podUID="f64c87eb-43dd-4b88-b7e7-32467fb2e83d"
	Oct 27 20:02:35 embed-certs-629838 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 20:02:35 embed-certs-629838 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 20:02:35 embed-certs-629838 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
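	
	Note: the kubelet entries above show dashboard-metrics-scraper cycling through CrashLoopBackOff with the usual doubling back-off (10s, then 20s). A minimal sketch for digging into the crash itself, reusing the pod name from the log (standard kubectl, nothing assumed beyond cluster access):
	
	  kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-ddw8x --previous
	  kubectl -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-ddw8x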
	
	
	==> kubernetes-dashboard [098bd45d59457750f47cfad92e238708c4767cf897a1ae3b97f78333c2fee810] <==
	2025/10/27 20:01:59 Starting overwatch
	2025/10/27 20:01:59 Using namespace: kubernetes-dashboard
	2025/10/27 20:01:59 Using in-cluster config to connect to apiserver
	2025/10/27 20:01:59 Using secret token for csrf signing
	2025/10/27 20:01:59 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 20:01:59 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 20:01:59 Successful initial request to the apiserver, version: v1.34.1
	2025/10/27 20:01:59 Generating JWE encryption key
	2025/10/27 20:01:59 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 20:01:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 20:01:59 Initializing JWE encryption key from synchronized object
	2025/10/27 20:01:59 Creating in-cluster Sidecar client
	2025/10/27 20:01:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 20:01:59 Serving insecurely on HTTP port: 9090
	2025/10/27 20:02:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
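	
	Note: the dashboard generated its CSRF token and JWE key on first start, storing them in the kubernetes-dashboard-csrf and kubernetes-dashboard-key-holder secrets named above; the metric-client failures line up with the crash-looping dashboard-metrics-scraper seen in the kubelet log. A minimal sketch to verify those secrets exist (names taken from the log):
	
	  kubectl -n kubernetes-dashboard get secret kubernetes-dashboard-csrf kubernetes-dashboard-key-holder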
	
	
	==> storage-provisioner [82adb2c58510baa0e605b645b22fc5bef69cd39b85ce04adde1bd01d6a2127b3] <==
	I1027 20:01:43.944628       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 20:02:13.946432       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e2942edb63853d027badc94c1e3181d04390c3297cb5ee2e519fc254c1c790eb] <==
	I1027 20:02:14.315038       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 20:02:14.338950       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 20:02:14.339081       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 20:02:14.341796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:02:17.797186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:02:22.057327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:02:25.655911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:02:28.709884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:02:31.732100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:02:31.738960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 20:02:31.739242       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 20:02:31.739426       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-629838_b895aab8-e8a0-4c22-a642-6e2cb0a2690c!
	I1027 20:02:31.739482       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2ebd0f97-21e2-431a-a333-48d0485c417f", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-629838_b895aab8-e8a0-4c22-a642-6e2cb0a2690c became leader
	W1027 20:02:31.745986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:02:31.749566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 20:02:31.839819       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-629838_b895aab8-e8a0-4c22-a642-6e2cb0a2690c!
	W1027 20:02:33.758967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:02:33.771831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:02:35.775213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:02:35.781445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:02:37.791456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:02:37.798734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
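	
	Note: the repeated "v1 Endpoints is deprecated" warnings come from the provisioner's leader election, which still reads and writes the k8s.io-minikube-hostpath Endpoints object on every renewal; they are deprecation noise, not failures. A minimal sketch to look at the object the warnings point at (name taken from the leader-election lines above):
	
	  kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml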
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-629838 -n embed-certs-629838
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-629838 -n embed-certs-629838: exit status 2 (416.746981ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
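Note: minikube status accepts a Go template, so --format={{.APIServer}} prints just that one field, and exit status 2 encodes "not every component is running" rather than a broken command (hence the "may be ok" above). A hedged sketch printing a few fields from the same template scope ({{.Kubelet}} is an assumption; {{.Host}} and {{.APIServer}} appear verbatim in this report):

  out/minikube-linux-arm64 status -p embed-certs-629838 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'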
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-629838 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-629838
helpers_test.go:243: (dbg) docker inspect embed-certs-629838:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c4f57eb9d97cb78db15678f52105eae7c06964f89c91a987456d1c3d7cf90fa0",
	        "Created": "2025-10-27T19:59:47.181587162Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 463126,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T20:01:29.541702446Z",
	            "FinishedAt": "2025-10-27T20:01:28.739237511Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/c4f57eb9d97cb78db15678f52105eae7c06964f89c91a987456d1c3d7cf90fa0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c4f57eb9d97cb78db15678f52105eae7c06964f89c91a987456d1c3d7cf90fa0/hostname",
	        "HostsPath": "/var/lib/docker/containers/c4f57eb9d97cb78db15678f52105eae7c06964f89c91a987456d1c3d7cf90fa0/hosts",
	        "LogPath": "/var/lib/docker/containers/c4f57eb9d97cb78db15678f52105eae7c06964f89c91a987456d1c3d7cf90fa0/c4f57eb9d97cb78db15678f52105eae7c06964f89c91a987456d1c3d7cf90fa0-json.log",
	        "Name": "/embed-certs-629838",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-629838:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-629838",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c4f57eb9d97cb78db15678f52105eae7c06964f89c91a987456d1c3d7cf90fa0",
	                "LowerDir": "/var/lib/docker/overlay2/11e03d0ec6d0fa1ee685eb208aa9a39493e58b461a663b0cbbf1e2b57f205fd2-init/diff:/var/lib/docker/overlay2/0567218bbe05e8b59b2aef7ad82032d6716040c1f9d9d73e7de8682101079f76/diff",
	                "MergedDir": "/var/lib/docker/overlay2/11e03d0ec6d0fa1ee685eb208aa9a39493e58b461a663b0cbbf1e2b57f205fd2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/11e03d0ec6d0fa1ee685eb208aa9a39493e58b461a663b0cbbf1e2b57f205fd2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/11e03d0ec6d0fa1ee685eb208aa9a39493e58b461a663b0cbbf1e2b57f205fd2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-629838",
	                "Source": "/var/lib/docker/volumes/embed-certs-629838/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-629838",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-629838",
	                "name.minikube.sigs.k8s.io": "embed-certs-629838",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3a3d568ff1480da30ecdc631ad9f93a95fff682a65b5eea03cd18b9069e202ae",
	            "SandboxKey": "/var/run/docker/netns/3a3d568ff148",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-629838": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:e6:84:10:19:ce",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e984493782940b4e21ac1d18681d3b8ebbf5771aadf9508ab04a1597fbf530b4",
	                    "EndpointID": "aacd75f77c1b7dfdc9a07170b1fdc23f711004e769f697cef656a1f9107cde74",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-629838",
	                        "c4f57eb9d97c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
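Note: when only one field is needed, docker inspect also takes a Go template instead of dumping the whole document. A hedged sketch pulling the host port mapped to the apiserver's 8443/tcp (container name and port taken from the inspect output above):

  docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' embed-certs-629838
  # expected output: 33436, matching NetworkSettings.Ports above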
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-629838 -n embed-certs-629838
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-629838 -n embed-certs-629838: exit status 2 (360.030342ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-629838 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-629838 logs -n 25: (1.283716087s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-942644 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-942644       │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:59 UTC │
	│ start   │ -p cert-expiration-280013 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-280013       │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:58 UTC │
	│ delete  │ -p cert-expiration-280013                                                                                                                                                                                                                     │ cert-expiration-280013       │ jenkins │ v1.37.0 │ 27 Oct 25 19:58 UTC │ 27 Oct 25 19:59 UTC │
	│ start   │ -p no-preload-300878 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 20:00 UTC │
	│ image   │ old-k8s-version-942644 image list --format=json                                                                                                                                                                                               │ old-k8s-version-942644       │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 19:59 UTC │
	│ pause   │ -p old-k8s-version-942644 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-942644       │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │                     │
	│ delete  │ -p old-k8s-version-942644                                                                                                                                                                                                                     │ old-k8s-version-942644       │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 19:59 UTC │
	│ delete  │ -p old-k8s-version-942644                                                                                                                                                                                                                     │ old-k8s-version-942644       │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 19:59 UTC │
	│ start   │ -p embed-certs-629838 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 20:01 UTC │
	│ addons  │ enable metrics-server -p no-preload-300878 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:00 UTC │                     │
	│ stop    │ -p no-preload-300878 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:00 UTC │ 27 Oct 25 20:00 UTC │
	│ addons  │ enable dashboard -p no-preload-300878 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:00 UTC │ 27 Oct 25 20:00 UTC │
	│ start   │ -p no-preload-300878 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:00 UTC │ 27 Oct 25 20:01 UTC │
	│ addons  │ enable metrics-server -p embed-certs-629838 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │                     │
	│ stop    │ -p embed-certs-629838 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ addons  │ enable dashboard -p embed-certs-629838 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ start   │ -p embed-certs-629838 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:02 UTC │
	│ image   │ no-preload-300878 image list --format=json                                                                                                                                                                                                    │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ pause   │ -p no-preload-300878 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │                     │
	│ delete  │ -p no-preload-300878                                                                                                                                                                                                                          │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ delete  │ -p no-preload-300878                                                                                                                                                                                                                          │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ delete  │ -p disable-driver-mounts-230052                                                                                                                                                                                                               │ disable-driver-mounts-230052 │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:02 UTC │
	│ start   │ -p default-k8s-diff-port-073048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-073048 │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │                     │
	│ image   │ embed-certs-629838 image list --format=json                                                                                                                                                                                                   │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:02 UTC │
	│ pause   │ -p embed-certs-629838 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 20:02:00
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 20:02:00.452179  466537 out.go:360] Setting OutFile to fd 1 ...
	I1027 20:02:00.452428  466537 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:02:00.452464  466537 out.go:374] Setting ErrFile to fd 2...
	I1027 20:02:00.452489  466537 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:02:00.452812  466537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 20:02:00.453385  466537 out.go:368] Setting JSON to false
	I1027 20:02:00.454624  466537 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9873,"bootTime":1761585448,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1027 20:02:00.454761  466537 start.go:141] virtualization:  
	I1027 20:02:00.458836  466537 out.go:179] * [default-k8s-diff-port-073048] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 20:02:00.462551  466537 notify.go:220] Checking for updates...
	I1027 20:02:00.463063  466537 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 20:02:00.466222  466537 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 20:02:00.469387  466537 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 20:02:00.472510  466537 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube
	I1027 20:02:00.475669  466537 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 20:02:00.478684  466537 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 20:02:00.482439  466537 config.go:182] Loaded profile config "embed-certs-629838": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:02:00.482576  466537 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 20:02:00.520247  466537 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 20:02:00.520455  466537 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 20:02:00.593093  466537 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 20:02:00.583744316 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 20:02:00.593208  466537 docker.go:318] overlay module found
	I1027 20:02:00.596422  466537 out.go:179] * Using the docker driver based on user configuration
	I1027 20:02:00.599446  466537 start.go:305] selected driver: docker
	I1027 20:02:00.599466  466537 start.go:925] validating driver "docker" against <nil>
	I1027 20:02:00.599480  466537 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 20:02:00.600231  466537 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 20:02:00.655639  466537 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 20:02:00.646894587 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 20:02:00.655788  466537 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1027 20:02:00.656021  466537 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 20:02:00.658955  466537 out.go:179] * Using Docker driver with root privileges
	I1027 20:02:00.661858  466537 cni.go:84] Creating CNI manager for ""
	I1027 20:02:00.661940  466537 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 20:02:00.661958  466537 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 20:02:00.662047  466537 start.go:349] cluster config:
	{Name:default-k8s-diff-port-073048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-073048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:02:00.665112  466537 out.go:179] * Starting "default-k8s-diff-port-073048" primary control-plane node in "default-k8s-diff-port-073048" cluster
	I1027 20:02:00.668059  466537 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 20:02:00.671159  466537 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 20:02:00.674097  466537 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:02:00.674160  466537 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 20:02:00.674173  466537 cache.go:58] Caching tarball of preloaded images
	I1027 20:02:00.674186  466537 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 20:02:00.674278  466537 preload.go:233] Found /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 20:02:00.674287  466537 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 20:02:00.674402  466537 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/config.json ...
	I1027 20:02:00.674428  466537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/config.json: {Name:mk7cebe9ec20daf0bb7cbc48e9425df7f73c402b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:00.698506  466537 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 20:02:00.698535  466537 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 20:02:00.698550  466537 cache.go:232] Successfully downloaded all kic artifacts
	I1027 20:02:00.698572  466537 start.go:360] acquireMachinesLock for default-k8s-diff-port-073048: {Name:mk90694371f699bc05745bfd1e2e3f9abdf20057 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 20:02:00.698674  466537 start.go:364] duration metric: took 85.905µs to acquireMachinesLock for "default-k8s-diff-port-073048"
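	For context on the lock spec printed above ({... Delay:500ms Timeout:10m0s ...}): the machines lock is acquired by retrying at a fixed delay until an overall timeout expires. A minimal Go sketch of that acquire-with-retry shape, assuming a plain O_EXCL lock file rather than minikube's actual lock package (acquire is an illustrative name, not minikube's API):
	
		package main
		
		import (
			"errors"
			"fmt"
			"os"
			"time"
		)
		
		// acquire tries to create the lock file exclusively, retrying every
		// delay until timeout elapses; the returned func releases the lock.
		func acquire(path string, delay, timeout time.Duration) (func(), error) {
			deadline := time.Now().Add(timeout)
			for {
				f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
				if err == nil {
					f.Close()
					return func() { os.Remove(path) }, nil
				}
				if !errors.Is(err, os.ErrExist) {
					return nil, err
				}
				if time.Now().After(deadline) {
					return nil, fmt.Errorf("timed out acquiring %s", path)
				}
				time.Sleep(delay)
			}
		}
		
		func main() {
			release, err := acquire("/tmp/mk-demo.lock", 500*time.Millisecond, 10*time.Minute)
			if err != nil {
				panic(err)
			}
			defer release()
			fmt.Println("lock held")
		}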
	I1027 20:02:00.698707  466537 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-073048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-073048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 20:02:00.698773  466537 start.go:125] createHost starting for "" (driver="docker")
	W1027 20:02:01.198917  462995 pod_ready.go:104] pod "coredns-66bc5c9577-ch8jv" is not "Ready", error: <nil>
	W1027 20:02:03.699467  462995 pod_ready.go:104] pod "coredns-66bc5c9577-ch8jv" is not "Ready", error: <nil>
	I1027 20:02:00.702140  466537 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 20:02:00.702379  466537 start.go:159] libmachine.API.Create for "default-k8s-diff-port-073048" (driver="docker")
	I1027 20:02:00.702443  466537 client.go:168] LocalClient.Create starting
	I1027 20:02:00.702539  466537 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem
	I1027 20:02:00.702579  466537 main.go:141] libmachine: Decoding PEM data...
	I1027 20:02:00.702599  466537 main.go:141] libmachine: Parsing certificate...
	I1027 20:02:00.702657  466537 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem
	I1027 20:02:00.702681  466537 main.go:141] libmachine: Decoding PEM data...
	I1027 20:02:00.702692  466537 main.go:141] libmachine: Parsing certificate...
	I1027 20:02:00.703147  466537 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-073048 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 20:02:00.719434  466537 cli_runner.go:211] docker network inspect default-k8s-diff-port-073048 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 20:02:00.719516  466537 network_create.go:284] running [docker network inspect default-k8s-diff-port-073048] to gather additional debugging logs...
	I1027 20:02:00.719537  466537 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-073048
	W1027 20:02:00.735065  466537 cli_runner.go:211] docker network inspect default-k8s-diff-port-073048 returned with exit code 1
	I1027 20:02:00.735108  466537 network_create.go:287] error running [docker network inspect default-k8s-diff-port-073048]: docker network inspect default-k8s-diff-port-073048: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-073048 not found
	I1027 20:02:00.735123  466537 network_create.go:289] output of [docker network inspect default-k8s-diff-port-073048]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-073048 not found
	
	** /stderr **
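	The failed inspect is re-run above purely so that its stdout and stderr can be logged separately. A minimal Go sketch of that separate-stream capture pattern, using plain os/exec instead of minikube's cli_runner (inspectNetwork is an illustrative helper name):
	
		package main
		
		import (
			"bytes"
			"fmt"
			"os/exec"
		)
		
		func inspectNetwork(name string) (stdout, stderr string, err error) {
			cmd := exec.Command("docker", "network", "inspect", name)
			var out, errBuf bytes.Buffer
			cmd.Stdout = &out    // collect the two streams independently so
			cmd.Stderr = &errBuf // both can be logged on a non-zero exit
			err = cmd.Run()
			return out.String(), errBuf.String(), err
		}
		
		func main() {
			out, errOut, err := inspectNetwork("default-k8s-diff-port-073048")
			fmt.Printf("stdout: %s\nstderr: %s\nerr: %v\n", out, errOut, err)
		}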
	I1027 20:02:00.735222  466537 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 20:02:00.752390  466537 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-74ee89127400 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:aa:c7:29:bd:7a:a9} reservation:<nil>}
	I1027 20:02:00.752809  466537 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9c57ca829ac8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:0a:24:08:53:d6:07} reservation:<nil>}
	I1027 20:02:00.753065  466537 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b7a7c45fd176 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:52:e6:cd:c8:a6} reservation:<nil>}
	I1027 20:02:00.753376  466537 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e98449378294 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:b2:53:5f:9c:fb:7f} reservation:<nil>}
	I1027 20:02:00.753812  466537 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a4a150}
	I1027 20:02:00.753837  466537 network_create.go:124] attempt to create docker network default-k8s-diff-port-073048 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1027 20:02:00.753896  466537 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-073048 default-k8s-diff-port-073048
	I1027 20:02:00.816664  466537 network_create.go:108] docker network default-k8s-diff-port-073048 192.168.85.0/24 created
	I1027 20:02:00.816698  466537 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-073048" container
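	The sequence above scans the known private /24s in order, takes the first free one (192.168.85.0/24), then assigns .1 to the bridge gateway and .2 to the single node container. A minimal Go sketch of that last derivation, assuming an IPv4 /24 and the convention that the node always gets the first client address (gatewayAndNodeIP is an illustrative helper, not minikube's API):
	
		package main
		
		import (
			"fmt"
			"net"
		)
		
		// gatewayAndNodeIP derives the .1 (gateway) and .2 (node) addresses
		// of an IPv4 /24, mirroring the 192.168.85.1/192.168.85.2 pair above.
		func gatewayAndNodeIP(cidr string) (net.IP, net.IP, error) {
			_, ipnet, err := net.ParseCIDR(cidr)
			if err != nil {
				return nil, nil, err
			}
			gw := make(net.IP, len(ipnet.IP.To4()))
			copy(gw, ipnet.IP.To4())
			gw[3] = 1
			node := make(net.IP, len(gw))
			copy(node, gw)
			node[3] = 2
			return gw, node, nil
		}
		
		func main() {
			gw, node, _ := gatewayAndNodeIP("192.168.85.0/24")
			fmt.Println(gw, node) // 192.168.85.1 192.168.85.2
		}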
	I1027 20:02:00.816797  466537 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 20:02:00.833151  466537 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-073048 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-073048 --label created_by.minikube.sigs.k8s.io=true
	I1027 20:02:00.851135  466537 oci.go:103] Successfully created a docker volume default-k8s-diff-port-073048
	I1027 20:02:00.851223  466537 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-073048-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-073048 --entrypoint /usr/bin/test -v default-k8s-diff-port-073048:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 20:02:01.453324  466537 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-073048
	I1027 20:02:01.453367  466537 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:02:01.453386  466537 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 20:02:01.453470  466537 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-073048:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1027 20:02:06.198265  462995 pod_ready.go:104] pod "coredns-66bc5c9577-ch8jv" is not "Ready", error: <nil>
	W1027 20:02:08.199278  462995 pod_ready.go:104] pod "coredns-66bc5c9577-ch8jv" is not "Ready", error: <nil>
	I1027 20:02:05.905632  466537 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-073048:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.452119322s)
	I1027 20:02:05.905667  466537 kic.go:203] duration metric: took 4.452277226s to extract preloaded images to volume ...
	W1027 20:02:05.905824  466537 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1027 20:02:05.905933  466537 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 20:02:05.966810  466537 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-073048 --name default-k8s-diff-port-073048 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-073048 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-073048 --network default-k8s-diff-port-073048 --ip 192.168.85.2 --volume default-k8s-diff-port-073048:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
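	Note the --publish=127.0.0.1::8444 form in the docker run above: with an empty host-port field, Docker binds a random free loopback port, which the provisioner then recovers via container-inspect templates like the "22/tcp" lookups below. Presumably the API-server port is recovered the same way, e.g.:
	
		docker container inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-073048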
	I1027 20:02:06.291608  466537 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-073048 --format={{.State.Running}}
	I1027 20:02:06.314583  466537 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-073048 --format={{.State.Status}}
	I1027 20:02:06.343605  466537 cli_runner.go:164] Run: docker exec default-k8s-diff-port-073048 stat /var/lib/dpkg/alternatives/iptables
	I1027 20:02:06.394651  466537 oci.go:144] the created container "default-k8s-diff-port-073048" has a running status.
	I1027 20:02:06.394678  466537 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa...
	I1027 20:02:07.030663  466537 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 20:02:07.059290  466537 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-073048 --format={{.State.Status}}
	I1027 20:02:07.084914  466537 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 20:02:07.084933  466537 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-073048 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 20:02:07.145298  466537 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-073048 --format={{.State.Status}}
	I1027 20:02:07.170347  466537 machine.go:93] provisionDockerMachine start ...
	I1027 20:02:07.170474  466537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:02:07.200095  466537 main.go:141] libmachine: Using SSH client type: native
	I1027 20:02:07.200460  466537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1027 20:02:07.200474  466537 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 20:02:07.383137  466537 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-073048
	
	I1027 20:02:07.383203  466537 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-073048"
	I1027 20:02:07.383300  466537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:02:07.403582  466537 main.go:141] libmachine: Using SSH client type: native
	I1027 20:02:07.403919  466537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1027 20:02:07.403933  466537 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-073048 && echo "default-k8s-diff-port-073048" | sudo tee /etc/hostname
	I1027 20:02:07.571798  466537 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-073048
	
	I1027 20:02:07.571891  466537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:02:07.594442  466537 main.go:141] libmachine: Using SSH client type: native
	I1027 20:02:07.594825  466537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1027 20:02:07.594849  466537 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-073048' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-073048/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-073048' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 20:02:07.751355  466537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
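	The "native" SSH client above logs in as docker on 127.0.0.1:33438 with the generated machine key. A minimal sketch of the same round trip with golang.org/x/crypto/ssh, with host-key checking disabled purely because the target is a throwaway local container:
	
		package main
		
		import (
			"fmt"
			"os"
		
			"golang.org/x/crypto/ssh"
		)
		
		func main() {
			// Key path and port are the ones printed in the log above.
			key, err := os.ReadFile("/home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa")
			if err != nil {
				panic(err)
			}
			signer, err := ssh.ParsePrivateKey(key)
			if err != nil {
				panic(err)
			}
			cfg := &ssh.ClientConfig{
				User:            "docker",
				Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
				HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local throwaway container
			}
			client, err := ssh.Dial("tcp", "127.0.0.1:33438", cfg)
			if err != nil {
				panic(err)
			}
			defer client.Close()
			session, err := client.NewSession()
			if err != nil {
				panic(err)
			}
			defer session.Close()
			out, err := session.Output("hostname") // the first provisioning command in the log
			if err != nil {
				panic(err)
			}
			fmt.Printf("%s", out)
		}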
	I1027 20:02:07.751379  466537 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-266035/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-266035/.minikube}
	I1027 20:02:07.751397  466537 ubuntu.go:190] setting up certificates
	I1027 20:02:07.751406  466537 provision.go:84] configureAuth start
	I1027 20:02:07.751466  466537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-073048
	I1027 20:02:07.769604  466537 provision.go:143] copyHostCerts
	I1027 20:02:07.769664  466537 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem, removing ...
	I1027 20:02:07.769673  466537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem
	I1027 20:02:07.769755  466537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem (1078 bytes)
	I1027 20:02:07.769860  466537 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem, removing ...
	I1027 20:02:07.769865  466537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem
	I1027 20:02:07.769891  466537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem (1123 bytes)
	I1027 20:02:07.769938  466537 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem, removing ...
	I1027 20:02:07.769943  466537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem
	I1027 20:02:07.769964  466537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem (1675 bytes)
	I1027 20:02:07.770015  466537 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-073048 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-073048 localhost minikube]
	I1027 20:02:08.051831  466537 provision.go:177] copyRemoteCerts
	I1027 20:02:08.051924  466537 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 20:02:08.051994  466537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:02:08.073631  466537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa Username:docker}
	I1027 20:02:08.178858  466537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 20:02:08.199695  466537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1027 20:02:08.217983  466537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 20:02:08.237621  466537 provision.go:87] duration metric: took 486.188281ms to configureAuth
	I1027 20:02:08.237646  466537 ubuntu.go:206] setting minikube options for container-runtime
	I1027 20:02:08.237831  466537 config.go:182] Loaded profile config "default-k8s-diff-port-073048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:02:08.237935  466537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:02:08.255377  466537 main.go:141] libmachine: Using SSH client type: native
	I1027 20:02:08.255702  466537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1027 20:02:08.255721  466537 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 20:02:08.601258  466537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 20:02:08.601282  466537 machine.go:96] duration metric: took 1.4309097s to provisionDockerMachine
	I1027 20:02:08.601293  466537 client.go:171] duration metric: took 7.898838782s to LocalClient.Create
	I1027 20:02:08.601306  466537 start.go:167] duration metric: took 7.898928479s to libmachine.API.Create "default-k8s-diff-port-073048"
	I1027 20:02:08.601313  466537 start.go:293] postStartSetup for "default-k8s-diff-port-073048" (driver="docker")
	I1027 20:02:08.601324  466537 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 20:02:08.601385  466537 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 20:02:08.601438  466537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:02:08.619078  466537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa Username:docker}
	I1027 20:02:08.722875  466537 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 20:02:08.726098  466537 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 20:02:08.726137  466537 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 20:02:08.726148  466537 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/addons for local assets ...
	I1027 20:02:08.726217  466537 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/files for local assets ...
	I1027 20:02:08.726337  466537 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem -> 2678802.pem in /etc/ssl/certs
	I1027 20:02:08.726442  466537 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 20:02:08.733702  466537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 20:02:08.751930  466537 start.go:296] duration metric: took 150.600588ms for postStartSetup
	I1027 20:02:08.752363  466537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-073048
	I1027 20:02:08.769852  466537 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/config.json ...
	I1027 20:02:08.770252  466537 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 20:02:08.770325  466537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:02:08.786857  466537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa Username:docker}
	I1027 20:02:08.888998  466537 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 20:02:08.893765  466537 start.go:128] duration metric: took 8.19497725s to createHost
	I1027 20:02:08.893790  466537 start.go:83] releasing machines lock for "default-k8s-diff-port-073048", held for 8.195107272s
	I1027 20:02:08.893871  466537 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-073048
	I1027 20:02:08.910520  466537 ssh_runner.go:195] Run: cat /version.json
	I1027 20:02:08.910577  466537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:02:08.910854  466537 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 20:02:08.910921  466537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:02:08.934390  466537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa Username:docker}
	I1027 20:02:08.948997  466537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa Username:docker}
	I1027 20:02:09.038927  466537 ssh_runner.go:195] Run: systemctl --version
	I1027 20:02:09.130963  466537 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 20:02:09.169154  466537 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 20:02:09.173902  466537 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 20:02:09.173973  466537 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 20:02:09.205761  466537 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1027 20:02:09.205787  466537 start.go:495] detecting cgroup driver to use...
	I1027 20:02:09.205832  466537 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 20:02:09.205900  466537 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 20:02:09.226547  466537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 20:02:09.242610  466537 docker.go:218] disabling cri-docker service (if available) ...
	I1027 20:02:09.242733  466537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 20:02:09.263756  466537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 20:02:09.283357  466537 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 20:02:09.395907  466537 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 20:02:09.517086  466537 docker.go:234] disabling docker service ...
	I1027 20:02:09.517370  466537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 20:02:09.548430  466537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 20:02:09.562448  466537 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 20:02:09.688470  466537 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 20:02:09.820298  466537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 20:02:09.833788  466537 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 20:02:09.851533  466537 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 20:02:09.851601  466537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:02:09.861372  466537 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 20:02:09.861442  466537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:02:09.870830  466537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:02:09.888518  466537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:02:09.898504  466537 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 20:02:09.907024  466537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:02:09.916366  466537 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:02:09.929829  466537 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:02:09.939307  466537 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 20:02:09.947022  466537 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 20:02:09.954205  466537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:02:10.076009  466537 ssh_runner.go:195] Run: sudo systemctl restart crio
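	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf containing a fragment roughly like the following (section placement follows stock CRI-O drop-in layout; the exact file may differ):
	
		[crio.image]
		pause_image = "registry.k8s.io/pause:3.10.1"
		
		[crio.runtime]
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]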
	I1027 20:02:10.225576  466537 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 20:02:10.225641  466537 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 20:02:10.229644  466537 start.go:563] Will wait 60s for crictl version
	I1027 20:02:10.229705  466537 ssh_runner.go:195] Run: which crictl
	I1027 20:02:10.234131  466537 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 20:02:10.266879  466537 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 20:02:10.267045  466537 ssh_runner.go:195] Run: crio --version
	I1027 20:02:10.298361  466537 ssh_runner.go:195] Run: crio --version
	I1027 20:02:10.331588  466537 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 20:02:10.334557  466537 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-073048 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 20:02:10.350671  466537 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1027 20:02:10.354806  466537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 20:02:10.364147  466537 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-073048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-073048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 20:02:10.364257  466537 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:02:10.364323  466537 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 20:02:10.403274  466537 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 20:02:10.403306  466537 crio.go:433] Images already preloaded, skipping extraction
	I1027 20:02:10.403362  466537 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 20:02:10.433421  466537 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 20:02:10.433445  466537 cache_images.go:85] Images are preloaded, skipping loading
	I1027 20:02:10.433453  466537 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1027 20:02:10.433544  466537 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-073048 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-073048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 20:02:10.433637  466537 ssh_runner.go:195] Run: crio config
	I1027 20:02:10.490910  466537 cni.go:84] Creating CNI manager for ""
	I1027 20:02:10.490933  466537 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 20:02:10.490946  466537 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 20:02:10.490970  466537 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-073048 NodeName:default-k8s-diff-port-073048 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 20:02:10.491135  466537 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-073048"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 20:02:10.491207  466537 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 20:02:10.499267  466537 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 20:02:10.499392  466537 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 20:02:10.507481  466537 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1027 20:02:10.520725  466537 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 20:02:10.535521  466537 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
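	Since the Kubernetes binaries were found under /var/lib/minikube/binaries/v1.34.1 above, the rendered kubeadm.yaml.new can be sanity-checked inside the node before init; kubeadm v1.26+ ships a config validator, so something along these lines should work:
	
		sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new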
	I1027 20:02:10.549926  466537 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1027 20:02:10.553386  466537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 20:02:10.562711  466537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:02:10.673102  466537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 20:02:10.689608  466537 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048 for IP: 192.168.85.2
	I1027 20:02:10.689680  466537 certs.go:195] generating shared ca certs ...
	I1027 20:02:10.689710  466537 certs.go:227] acquiring lock for ca certs: {Name:mk172548fc54811b51040cc0201d01eb4b3dd19b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:10.689894  466537 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key
	I1027 20:02:10.689968  466537 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key
	I1027 20:02:10.690002  466537 certs.go:257] generating profile certs ...
	I1027 20:02:10.690078  466537 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/client.key
	I1027 20:02:10.690117  466537 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/client.crt with IP's: []
	I1027 20:02:11.037755  466537 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/client.crt ...
	I1027 20:02:11.037788  466537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/client.crt: {Name:mk3dec30b7bddf618c0aaebf4bc94cceefd537a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:11.038025  466537 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/client.key ...
	I1027 20:02:11.038043  466537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/client.key: {Name:mkc9b9080434b4424244398f3ae4654bdc4244e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:11.038140  466537 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/apiserver.key.09593244
	I1027 20:02:11.038158  466537 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/apiserver.crt.09593244 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1027 20:02:11.407036  466537 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/apiserver.crt.09593244 ...
	I1027 20:02:11.407069  466537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/apiserver.crt.09593244: {Name:mkde7f8dae578d835a9db285b2d1f3af7707bef8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:11.407261  466537 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/apiserver.key.09593244 ...
	I1027 20:02:11.407276  466537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/apiserver.key.09593244: {Name:mk80a1cda5ee26a996ed938d2b709b9c54cecda5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:11.407363  466537 certs.go:382] copying /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/apiserver.crt.09593244 -> /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/apiserver.crt
	I1027 20:02:11.407443  466537 certs.go:386] copying /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/apiserver.key.09593244 -> /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/apiserver.key
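	The apiserver certificate above carries four IP SANs (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2). A self-contained Go sketch of minting such a SAN-bearing server certificate with crypto/x509, self-signed here for brevity where minikube signs with its minikubeCA key instead:
	
		package main
		
		import (
			"crypto/rand"
			"crypto/rsa"
			"crypto/x509"
			"crypto/x509/pkix"
			"encoding/pem"
			"math/big"
			"net"
			"os"
			"time"
		)
		
		func main() {
			key, err := rsa.GenerateKey(rand.Reader, 2048)
			if err != nil {
				panic(err)
			}
			tmpl := &x509.Certificate{
				SerialNumber: big.NewInt(1),
				Subject:      pkix.Name{CommonName: "minikube"},
				NotBefore:    time.Now(),
				NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
				KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
				ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
				IPAddresses: []net.IP{ // the four SANs logged above
					net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
					net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
				},
			}
			// Self-signed: the template doubles as its own parent.
			der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
			if err != nil {
				panic(err)
			}
			pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
		}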
	I1027 20:02:11.407503  466537 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/proxy-client.key
	I1027 20:02:11.407520  466537 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/proxy-client.crt with IP's: []
	I1027 20:02:11.637374  466537 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/proxy-client.crt ...
	I1027 20:02:11.637404  466537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/proxy-client.crt: {Name:mke2088307e623c0e909eee79396116c9ce51be5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:11.637591  466537 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/proxy-client.key ...
	I1027 20:02:11.637607  466537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/proxy-client.key: {Name:mk3024b96351873255027672b0fc172270e8409f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:11.637805  466537 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880.pem (1338 bytes)
	W1027 20:02:11.637845  466537 certs.go:480] ignoring /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880_empty.pem, impossibly tiny 0 bytes
	I1027 20:02:11.637860  466537 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 20:02:11.637889  466537 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem (1078 bytes)
	I1027 20:02:11.637916  466537 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem (1123 bytes)
	I1027 20:02:11.637942  466537 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem (1675 bytes)
	I1027 20:02:11.637988  466537 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 20:02:11.638552  466537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 20:02:11.658336  466537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 20:02:11.676394  466537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 20:02:11.697927  466537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 20:02:11.717890  466537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1027 20:02:11.735620  466537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 20:02:11.754553  466537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 20:02:11.772095  466537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 20:02:11.791754  466537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /usr/share/ca-certificates/2678802.pem (1708 bytes)
	I1027 20:02:11.809450  466537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 20:02:11.827479  466537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880.pem --> /usr/share/ca-certificates/267880.pem (1338 bytes)
	I1027 20:02:11.844668  466537 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 20:02:11.856866  466537 ssh_runner.go:195] Run: openssl version
	I1027 20:02:11.862951  466537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 20:02:11.871148  466537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:02:11.874691  466537 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:02:11.874777  466537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:02:11.915862  466537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 20:02:11.924270  466537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/267880.pem && ln -fs /usr/share/ca-certificates/267880.pem /etc/ssl/certs/267880.pem"
	I1027 20:02:11.932658  466537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/267880.pem
	I1027 20:02:11.936668  466537 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:04 /usr/share/ca-certificates/267880.pem
	I1027 20:02:11.936755  466537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/267880.pem
	I1027 20:02:11.978278  466537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/267880.pem /etc/ssl/certs/51391683.0"
	I1027 20:02:11.987429  466537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2678802.pem && ln -fs /usr/share/ca-certificates/2678802.pem /etc/ssl/certs/2678802.pem"
	I1027 20:02:11.996236  466537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2678802.pem
	I1027 20:02:12.008101  466537 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:04 /usr/share/ca-certificates/2678802.pem
	I1027 20:02:12.008190  466537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2678802.pem
	I1027 20:02:12.050458  466537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2678802.pem /etc/ssl/certs/3ec20f2e.0"
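
The openssl/ln pairs above implement the OpenSSL CA-directory convention: every trusted certificate under /etc/ssl/certs must also be reachable through a symlink named <subject-hash>.0 so TLS clients can look it up by hash. A minimal Go sketch of the same step, shelling out to openssl just as the ssh_runner commands do (linkCACert is an illustrative helper name, not a minikube function):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of certPath and creates
// /etc/ssl/certs/<hash>.0 pointing at it, mirroring the
// "openssl x509 -hash -noout" + "ln -fs" sequence in the log above.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // plays the role of ln's -f: replace a stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

The log's `test -L ... || ln -fs ...` guard makes reruns idempotent; the os.Remove above serves the same purpose.
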
	I1027 20:02:12.060145  466537 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 20:02:12.063954  466537 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 20:02:12.064025  466537 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-073048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-073048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:02:12.064102  466537 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 20:02:12.064171  466537 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 20:02:12.096232  466537 cri.go:89] found id: ""
	I1027 20:02:12.096381  466537 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 20:02:12.108262  466537 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 20:02:12.117899  466537 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1027 20:02:12.117966  466537 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 20:02:12.129180  466537 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 20:02:12.129250  466537 kubeadm.go:157] found existing configuration files:
	
	I1027 20:02:12.129332  466537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1027 20:02:12.138753  466537 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 20:02:12.138864  466537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 20:02:12.146299  466537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1027 20:02:12.154399  466537 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 20:02:12.154491  466537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 20:02:12.161964  466537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1027 20:02:12.169675  466537 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 20:02:12.169745  466537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 20:02:12.177185  466537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1027 20:02:12.184880  466537 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 20:02:12.185002  466537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
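
The four grep/rm rounds above are one pattern: a leftover kubeconfig is kept only if it already points at https://control-plane.minikube.internal:8444; anything else (including a missing file, hence the status-2 exits) is removed so `kubeadm init` can write fresh ones. A hedged Go sketch of that cleanup over the same file list (pruneStaleKubeconfigs is an invented name):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// pruneStaleKubeconfigs removes each config that does not mention the
// expected API endpoint, mirroring the grep/rm sequence in the log.
func pruneStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			// missing or pointing elsewhere: remove so kubeadm rewrites it
			os.Remove(f)
			fmt.Printf("removed stale %s\n", f)
		}
	}
}

func main() {
	pruneStaleKubeconfigs("https://control-plane.minikube.internal:8444")
}
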
	I1027 20:02:12.192280  466537 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 20:02:12.237945  466537 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1027 20:02:12.238159  466537 kubeadm.go:318] [preflight] Running pre-flight checks
	I1027 20:02:12.270120  466537 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1027 20:02:12.270222  466537 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1027 20:02:12.270264  466537 kubeadm.go:318] OS: Linux
	I1027 20:02:12.270331  466537 kubeadm.go:318] CGROUPS_CPU: enabled
	I1027 20:02:12.270399  466537 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1027 20:02:12.270469  466537 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1027 20:02:12.270536  466537 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1027 20:02:12.270602  466537 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1027 20:02:12.270687  466537 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1027 20:02:12.270752  466537 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1027 20:02:12.270819  466537 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1027 20:02:12.270885  466537 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1027 20:02:12.352619  466537 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 20:02:12.352737  466537 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 20:02:12.352844  466537 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 20:02:12.361430  466537 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1027 20:02:10.709501  462995 pod_ready.go:104] pod "coredns-66bc5c9577-ch8jv" is not "Ready", error: <nil>
	W1027 20:02:13.199005  462995 pod_ready.go:104] pod "coredns-66bc5c9577-ch8jv" is not "Ready", error: <nil>
	I1027 20:02:12.367893  466537 out.go:252]   - Generating certificates and keys ...
	I1027 20:02:12.368001  466537 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1027 20:02:12.368076  466537 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1027 20:02:12.652566  466537 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 20:02:12.910174  466537 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1027 20:02:13.616264  466537 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1027 20:02:13.820838  466537 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1027 20:02:14.605092  466537 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1027 20:02:14.605754  466537 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-073048 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1027 20:02:15.277284  466537 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1027 20:02:15.277720  466537 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-073048 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	W1027 20:02:15.204785  462995 pod_ready.go:104] pod "coredns-66bc5c9577-ch8jv" is not "Ready", error: <nil>
	W1027 20:02:17.699172  462995 pod_ready.go:104] pod "coredns-66bc5c9577-ch8jv" is not "Ready", error: <nil>
	I1027 20:02:16.086237  466537 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 20:02:16.966896  466537 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 20:02:17.464452  466537 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1027 20:02:17.464744  466537 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 20:02:17.968981  466537 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 20:02:18.440834  466537 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 20:02:18.708868  466537 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 20:02:18.980130  466537 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 20:02:19.906232  466537 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 20:02:19.906884  466537 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 20:02:19.909805  466537 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 20:02:19.913120  466537 out.go:252]   - Booting up control plane ...
	I1027 20:02:19.913221  466537 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 20:02:19.913317  466537 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 20:02:19.914262  466537 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 20:02:19.932958  466537 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 20:02:19.933081  466537 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 20:02:19.941921  466537 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 20:02:19.942251  466537 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 20:02:19.942300  466537 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1027 20:02:20.103417  466537 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 20:02:20.103542  466537 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1027 20:02:19.699316  462995 pod_ready.go:104] pod "coredns-66bc5c9577-ch8jv" is not "Ready", error: <nil>
	I1027 20:02:21.698481  462995 pod_ready.go:94] pod "coredns-66bc5c9577-ch8jv" is "Ready"
	I1027 20:02:21.698555  462995 pod_ready.go:86] duration metric: took 37.006187785s for pod "coredns-66bc5c9577-ch8jv" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:02:21.702390  462995 pod_ready.go:83] waiting for pod "etcd-embed-certs-629838" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:02:21.708240  462995 pod_ready.go:94] pod "etcd-embed-certs-629838" is "Ready"
	I1027 20:02:21.708313  462995 pod_ready.go:86] duration metric: took 5.847869ms for pod "etcd-embed-certs-629838" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:02:21.712595  462995 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-629838" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:02:21.716381  462995 pod_ready.go:94] pod "kube-apiserver-embed-certs-629838" is "Ready"
	I1027 20:02:21.716401  462995 pod_ready.go:86] duration metric: took 3.786003ms for pod "kube-apiserver-embed-certs-629838" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:02:21.718384  462995 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-629838" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:02:21.896217  462995 pod_ready.go:94] pod "kube-controller-manager-embed-certs-629838" is "Ready"
	I1027 20:02:21.896286  462995 pod_ready.go:86] duration metric: took 177.884298ms for pod "kube-controller-manager-embed-certs-629838" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:02:22.100995  462995 pod_ready.go:83] waiting for pod "kube-proxy-bwql6" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:02:22.497174  462995 pod_ready.go:94] pod "kube-proxy-bwql6" is "Ready"
	I1027 20:02:22.497250  462995 pod_ready.go:86] duration metric: took 396.172576ms for pod "kube-proxy-bwql6" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:02:22.696148  462995 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-629838" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:02:23.100997  462995 pod_ready.go:94] pod "kube-scheduler-embed-certs-629838" is "Ready"
	I1027 20:02:23.101072  462995 pod_ready.go:86] duration metric: took 404.852145ms for pod "kube-scheduler-embed-certs-629838" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:02:23.101101  462995 pod_ready.go:40] duration metric: took 38.413556108s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 20:02:23.215007  462995 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 20:02:23.218010  462995 out.go:179] * Done! kubectl is now configured to use "embed-certs-629838" cluster and "default" namespace by default
	I1027 20:02:21.109506  466537 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.002230236s
	I1027 20:02:21.109646  466537 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 20:02:21.109746  466537 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1027 20:02:21.109849  466537 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 20:02:21.109956  466537 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 20:02:25.714464  466537 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.605095298s
	I1027 20:02:25.983914  466537 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.874849435s
	I1027 20:02:27.611230  466537 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.50209575s
	I1027 20:02:27.631954  466537 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 20:02:27.648446  466537 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 20:02:27.663147  466537 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 20:02:27.663384  466537 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-073048 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 20:02:27.681225  466537 kubeadm.go:318] [bootstrap-token] Using token: xsum8b.ramxdmytyu4idcni
	I1027 20:02:27.684323  466537 out.go:252]   - Configuring RBAC rules ...
	I1027 20:02:27.684453  466537 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 20:02:27.689088  466537 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 20:02:27.698096  466537 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 20:02:27.702404  466537 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 20:02:27.710374  466537 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 20:02:27.715128  466537 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 20:02:28.025333  466537 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 20:02:28.450941  466537 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1027 20:02:29.018078  466537 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1027 20:02:29.019750  466537 kubeadm.go:318] 
	I1027 20:02:29.019827  466537 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1027 20:02:29.019841  466537 kubeadm.go:318] 
	I1027 20:02:29.019923  466537 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1027 20:02:29.019931  466537 kubeadm.go:318] 
	I1027 20:02:29.019958  466537 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1027 20:02:29.020024  466537 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 20:02:29.020085  466537 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 20:02:29.020092  466537 kubeadm.go:318] 
	I1027 20:02:29.020148  466537 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1027 20:02:29.020156  466537 kubeadm.go:318] 
	I1027 20:02:29.020206  466537 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 20:02:29.020219  466537 kubeadm.go:318] 
	I1027 20:02:29.020274  466537 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1027 20:02:29.020355  466537 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 20:02:29.020430  466537 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 20:02:29.020439  466537 kubeadm.go:318] 
	I1027 20:02:29.020526  466537 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 20:02:29.020611  466537 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1027 20:02:29.020619  466537 kubeadm.go:318] 
	I1027 20:02:29.020706  466537 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token xsum8b.ramxdmytyu4idcni \
	I1027 20:02:29.020816  466537 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9473f077605f6866eb08c8a712af7c56a0deb8dc2a907ec8cbb4b8ae396e8f1f \
	I1027 20:02:29.020840  466537 kubeadm.go:318] 	--control-plane 
	I1027 20:02:29.020851  466537 kubeadm.go:318] 
	I1027 20:02:29.020941  466537 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1027 20:02:29.020949  466537 kubeadm.go:318] 
	I1027 20:02:29.021034  466537 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token xsum8b.ramxdmytyu4idcni \
	I1027 20:02:29.021145  466537 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9473f077605f6866eb08c8a712af7c56a0deb8dc2a907ec8cbb4b8ae396e8f1f 
	I1027 20:02:29.025629  466537 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1027 20:02:29.025865  466537 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1027 20:02:29.025979  466537 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
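
The join commands printed above carry a --discovery-token-ca-cert-hash, which kubeadm defines as a SHA-256 digest of the cluster CA's DER-encoded Subject Public Key Info. A small Go sketch that recomputes that pin from the CA file scp'd earlier in this log (the path is taken from the log; caCertHash is an illustrative helper):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash reproduces the value passed to --discovery-token-ca-cert-hash:
// sha256 over the DER-encoded SubjectPublicKeyInfo of the cluster CA.
func caCertHash(caPEM []byte) (string, error) {
	block, _ := pem.Decode(caPEM)
	if block == nil {
		return "", fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return "sha256:" + hex.EncodeToString(sum[:]), nil
}

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	h, err := caCertHash(pemBytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(h)
}
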
	I1027 20:02:29.026017  466537 cni.go:84] Creating CNI manager for ""
	I1027 20:02:29.026029  466537 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 20:02:29.029252  466537 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 20:02:29.032290  466537 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 20:02:29.036632  466537 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 20:02:29.036655  466537 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 20:02:29.050047  466537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 20:02:29.361429  466537 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 20:02:29.361563  466537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:02:29.361663  466537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-073048 minikube.k8s.io/updated_at=2025_10_27T20_02_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f minikube.k8s.io/name=default-k8s-diff-port-073048 minikube.k8s.io/primary=true
	I1027 20:02:29.515886  466537 ops.go:34] apiserver oom_adj: -16
	I1027 20:02:29.515905  466537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:02:30.019982  466537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:02:30.516367  466537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:02:31.016003  466537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:02:31.516711  466537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:02:32.016020  466537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:02:32.515937  466537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:02:33.016021  466537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:02:33.516205  466537 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:02:33.620099  466537 kubeadm.go:1113] duration metric: took 4.258578635s to wait for elevateKubeSystemPrivileges
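
The burst of identical `kubectl get sa default` runs at roughly 500ms intervals above is a readiness poll: minikube waits for the default service account to exist before binding kube-system to cluster-admin, then reports the elapsed time as the elevateKubeSystemPrivileges metric. A generic Go sketch of that bounded poll, with the command and cadence lifted from the log and the wrapper name invented:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls until `kubectl get sa default` succeeds or the
// context expires, roughly matching the ~500ms cadence in the log.
func waitForDefaultSA(ctx context.Context, kubeconfig string) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		cmd := exec.CommandContext(ctx, "kubectl", "get", "sa", "default",
			"--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the default service account now exists
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	fmt.Println(waitForDefaultSA(ctx, "/var/lib/minikube/kubeconfig"))
}
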
	I1027 20:02:33.620138  466537 kubeadm.go:402] duration metric: took 21.556116366s to StartCluster
	I1027 20:02:33.620156  466537 settings.go:142] acquiring lock: {Name:mk49b9e2e9b7901c6b51e89b8b1e8ee7f9e88107 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:33.620229  466537 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 20:02:33.621800  466537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/kubeconfig: {Name:mk0a03a594f8d9f9199b1c7cff7c550cec8414c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:33.622048  466537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 20:02:33.622060  466537 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 20:02:33.622318  466537 config.go:182] Loaded profile config "default-k8s-diff-port-073048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:02:33.622365  466537 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 20:02:33.622446  466537 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-073048"
	I1027 20:02:33.622469  466537 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-073048"
	I1027 20:02:33.622498  466537 host.go:66] Checking if "default-k8s-diff-port-073048" exists ...
	I1027 20:02:33.622978  466537 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-073048 --format={{.State.Status}}
	I1027 20:02:33.623393  466537 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-073048"
	I1027 20:02:33.623412  466537 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-073048"
	I1027 20:02:33.623693  466537 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-073048 --format={{.State.Status}}
	I1027 20:02:33.627092  466537 out.go:179] * Verifying Kubernetes components...
	I1027 20:02:33.637147  466537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:02:33.673940  466537 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 20:02:33.675252  466537 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-073048"
	I1027 20:02:33.675295  466537 host.go:66] Checking if "default-k8s-diff-port-073048" exists ...
	I1027 20:02:33.675764  466537 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-073048 --format={{.State.Status}}
	I1027 20:02:33.676861  466537 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 20:02:33.676886  466537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 20:02:33.676937  466537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:02:33.740984  466537 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 20:02:33.741014  466537 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 20:02:33.741088  466537 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:02:33.757260  466537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa Username:docker}
	I1027 20:02:33.776639  466537 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa Username:docker}
	I1027 20:02:33.995598  466537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 20:02:33.995724  466537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 20:02:34.050633  466537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 20:02:34.111720  466537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 20:02:34.564110  466537 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-073048" to be "Ready" ...
	I1027 20:02:34.564487  466537 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1027 20:02:34.995088  466537 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1027 20:02:35.007861  466537 addons.go:514] duration metric: took 1.385460103s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1027 20:02:35.069913  466537 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-073048" context rescaled to 1 replicas
	W1027 20:02:36.567032  466537 node_ready.go:57] node "default-k8s-diff-port-073048" has "Ready":"False" status (will retry)
	W1027 20:02:38.568149  466537 node_ready.go:57] node "default-k8s-diff-port-073048" has "Ready":"False" status (will retry)
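
The long `sed` pipeline at 20:02:33 above rewrites the CoreDNS ConfigMap in place: it inserts a hosts{} stanza ahead of the `forward . /etc/resolv.conf` line so host.minikube.internal resolves to the gateway (192.168.85.1 here), then feeds the result to `kubectl replace`. A Go sketch of just the Corefile edit (the log's pipeline also adds a `log` directive before `errors`, which this sketch skips):

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} stanza before the "forward" line of a
// Corefile, which is what the sed pipeline in the log does to map
// host.minikube.internal to the host gateway IP.
func injectHostRecord(corefile, hostIP string) string {
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			b.WriteString(hosts)
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.85.1"))
}
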
	
	
	==> CRI-O <==
	Oct 27 20:02:15 embed-certs-629838 crio[648]: time="2025-10-27T20:02:15.017995605Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:02:15 embed-certs-629838 crio[648]: time="2025-10-27T20:02:15.038041616Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:02:15 embed-certs-629838 crio[648]: time="2025-10-27T20:02:15.038830632Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:02:15 embed-certs-629838 crio[648]: time="2025-10-27T20:02:15.067868202Z" level=info msg="Created container 05a28f65e7306f32e6fe44d10f1eb7513a2f00be81d35bf861d871965164280c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ddw8x/dashboard-metrics-scraper" id=33ad5ee1-89e2-43cd-ab58-33d0882491ca name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 20:02:15 embed-certs-629838 crio[648]: time="2025-10-27T20:02:15.072471741Z" level=info msg="Starting container: 05a28f65e7306f32e6fe44d10f1eb7513a2f00be81d35bf861d871965164280c" id=e430a4ca-f8b1-4971-8063-565aab7f51d2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 20:02:15 embed-certs-629838 crio[648]: time="2025-10-27T20:02:15.080133171Z" level=info msg="Started container" PID=1638 containerID=05a28f65e7306f32e6fe44d10f1eb7513a2f00be81d35bf861d871965164280c description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ddw8x/dashboard-metrics-scraper id=e430a4ca-f8b1-4971-8063-565aab7f51d2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=101807dc653a62bf806f9e8df5e5c4ce035d4ffa3f3b030a4bee4cd940cd1e3b
	Oct 27 20:02:15 embed-certs-629838 conmon[1636]: conmon 05a28f65e7306f32e6fe <ninfo>: container 1638 exited with status 1
	Oct 27 20:02:15 embed-certs-629838 crio[648]: time="2025-10-27T20:02:15.252110655Z" level=info msg="Removing container: 8c895b40bc07fab1aa6283121a31d2bbc6b09d35a5b8c241b7d9592dba32c1fe" id=c5913882-fe40-4aee-a6fc-b4ec1ff4cf8a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 20:02:15 embed-certs-629838 crio[648]: time="2025-10-27T20:02:15.265041198Z" level=info msg="Error loading conmon cgroup of container 8c895b40bc07fab1aa6283121a31d2bbc6b09d35a5b8c241b7d9592dba32c1fe: cgroup deleted" id=c5913882-fe40-4aee-a6fc-b4ec1ff4cf8a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 20:02:15 embed-certs-629838 crio[648]: time="2025-10-27T20:02:15.272668078Z" level=info msg="Removed container 8c895b40bc07fab1aa6283121a31d2bbc6b09d35a5b8c241b7d9592dba32c1fe: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ddw8x/dashboard-metrics-scraper" id=c5913882-fe40-4aee-a6fc-b4ec1ff4cf8a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 20:02:23 embed-certs-629838 crio[648]: time="2025-10-27T20:02:23.824429493Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 20:02:23 embed-certs-629838 crio[648]: time="2025-10-27T20:02:23.833500098Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 20:02:23 embed-certs-629838 crio[648]: time="2025-10-27T20:02:23.833691026Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 20:02:23 embed-certs-629838 crio[648]: time="2025-10-27T20:02:23.833836655Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 20:02:23 embed-certs-629838 crio[648]: time="2025-10-27T20:02:23.846282723Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 20:02:23 embed-certs-629838 crio[648]: time="2025-10-27T20:02:23.846515734Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 20:02:23 embed-certs-629838 crio[648]: time="2025-10-27T20:02:23.846631964Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 20:02:23 embed-certs-629838 crio[648]: time="2025-10-27T20:02:23.855657697Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 20:02:23 embed-certs-629838 crio[648]: time="2025-10-27T20:02:23.855862729Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 20:02:23 embed-certs-629838 crio[648]: time="2025-10-27T20:02:23.855957307Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 20:02:23 embed-certs-629838 crio[648]: time="2025-10-27T20:02:23.862164231Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 20:02:23 embed-certs-629838 crio[648]: time="2025-10-27T20:02:23.86237316Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 20:02:23 embed-certs-629838 crio[648]: time="2025-10-27T20:02:23.862477453Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 20:02:23 embed-certs-629838 crio[648]: time="2025-10-27T20:02:23.871801384Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 20:02:23 embed-certs-629838 crio[648]: time="2025-10-27T20:02:23.871973056Z" level=info msg="Updated default CNI network name to kindnet"
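
The "CNI monitoring event" lines above show CRI-O's inotify watch on /etc/cni/net.d: kindnet writes 10-kindnet.conflist.temp and renames it into place, and each CREATE/WRITE/RENAME prompts CRI-O to re-resolve the default network. A rough sketch of the same watch loop using the fsnotify package; the logging here stands in for CRI-O's actual reload logic:

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			// CRI-O logs CREATE/WRITE/RENAME here and re-reads the
			// remaining conflist files to pick the default CNI network.
			log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
		case err := <-w.Errors:
			log.Println("watch error:", err)
		}
	}
}
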
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	05a28f65e7306       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago       Exited              dashboard-metrics-scraper   2                   101807dc653a6       dashboard-metrics-scraper-6ffb444bf9-ddw8x   kubernetes-dashboard
	e2942edb63853       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           26 seconds ago       Running             storage-provisioner         2                   7b41b223ce06c       storage-provisioner                          kube-system
	098bd45d59457       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   42 seconds ago       Running             kubernetes-dashboard        0                   3d6d5b98e1974       kubernetes-dashboard-855c9754f9-zplzg        kubernetes-dashboard
	b64abd248b5af       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           57 seconds ago       Running             busybox                     1                   8cbe3951908ef       busybox                                      default
	36598828b97e6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           57 seconds ago       Running             coredns                     1                   6defc14652a78       coredns-66bc5c9577-ch8jv                     kube-system
	82adb2c58510b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           57 seconds ago       Exited              storage-provisioner         1                   7b41b223ce06c       storage-provisioner                          kube-system
	fff55bbbe9a89       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           57 seconds ago       Running             kube-proxy                  1                   0a3faf612e1b2       kube-proxy-bwql6                             kube-system
	1ec2027ec7db3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           57 seconds ago       Running             kindnet-cni                 1                   5b8d10ca9acfa       kindnet-cfqpk                                kube-system
	62e147a7d6f68       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   ba99f7ad335af       kube-scheduler-embed-certs-629838            kube-system
	d4ab5323a8b08       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   f25cb03f02ff1       kube-controller-manager-embed-certs-629838   kube-system
	b939b6634b4d0       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   bd06cd33f97af       etcd-embed-certs-629838                      kube-system
	0ad8adca28e83       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   6d9a9e16144a8       kube-apiserver-embed-certs-629838            kube-system
	
	
	==> coredns [36598828b97e660c2e2764dd87dc4bc9566a206293908f2e298358a5c6ba4a21] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40424 - 10862 "HINFO IN 1449722674201347895.966792375961270052. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021967729s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-629838
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-629838
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=embed-certs-629838
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T20_00_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 20:00:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-629838
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 20:02:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 20:02:13 +0000   Mon, 27 Oct 2025 20:00:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 20:02:13 +0000   Mon, 27 Oct 2025 20:00:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 20:02:13 +0000   Mon, 27 Oct 2025 20:00:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 20:02:13 +0000   Mon, 27 Oct 2025 20:01:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-629838
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                6cfa2846-7c31-4e89-9dcc-f2fbb567f43d
	  Boot ID:                    23ea05b4-8203-4fb9-a84a-5deb4b091cbb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-ch8jv                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m19s
	  kube-system                 etcd-embed-certs-629838                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m24s
	  kube-system                 kindnet-cfqpk                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m20s
	  kube-system                 kube-apiserver-embed-certs-629838             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-controller-manager-embed-certs-629838    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-proxy-bwql6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-scheduler-embed-certs-629838             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-ddw8x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-zplzg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m17s                  kube-proxy       
	  Normal   Starting                 56s                    kube-proxy       
	  Normal   Starting                 2m32s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m32s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m32s (x8 over 2m32s)  kubelet          Node embed-certs-629838 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m32s (x8 over 2m32s)  kubelet          Node embed-certs-629838 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m32s (x8 over 2m32s)  kubelet          Node embed-certs-629838 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m24s                  kubelet          Node embed-certs-629838 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m24s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m24s                  kubelet          Node embed-certs-629838 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m24s                  kubelet          Node embed-certs-629838 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m24s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m20s                  node-controller  Node embed-certs-629838 event: Registered Node embed-certs-629838 in Controller
	  Normal   NodeReady                98s                    kubelet          Node embed-certs-629838 status is now: NodeReady
	  Normal   Starting                 65s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 65s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  64s (x8 over 65s)      kubelet          Node embed-certs-629838 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s (x8 over 65s)      kubelet          Node embed-certs-629838 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s (x8 over 65s)      kubelet          Node embed-certs-629838 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                    node-controller  Node embed-certs-629838 event: Registered Node embed-certs-629838 in Controller
	
	
	==> dmesg <==
	[Oct27 19:38] overlayfs: idmapped layers are currently not supported
	[Oct27 19:39] overlayfs: idmapped layers are currently not supported
	[Oct27 19:40] overlayfs: idmapped layers are currently not supported
	[  +7.015891] overlayfs: idmapped layers are currently not supported
	[Oct27 19:41] overlayfs: idmapped layers are currently not supported
	[ +28.455216] overlayfs: idmapped layers are currently not supported
	[Oct27 19:42] overlayfs: idmapped layers are currently not supported
	[ +26.490174] overlayfs: idmapped layers are currently not supported
	[Oct27 19:44] overlayfs: idmapped layers are currently not supported
	[Oct27 19:45] overlayfs: idmapped layers are currently not supported
	[Oct27 19:47] overlayfs: idmapped layers are currently not supported
	[Oct27 19:49] overlayfs: idmapped layers are currently not supported
	[ +31.410335] overlayfs: idmapped layers are currently not supported
	[Oct27 19:51] overlayfs: idmapped layers are currently not supported
	[Oct27 19:53] overlayfs: idmapped layers are currently not supported
	[Oct27 19:54] overlayfs: idmapped layers are currently not supported
	[Oct27 19:55] overlayfs: idmapped layers are currently not supported
	[Oct27 19:56] overlayfs: idmapped layers are currently not supported
	[Oct27 19:57] overlayfs: idmapped layers are currently not supported
	[Oct27 19:58] overlayfs: idmapped layers are currently not supported
	[Oct27 19:59] overlayfs: idmapped layers are currently not supported
	[Oct27 20:00] overlayfs: idmapped layers are currently not supported
	[ +41.321877] overlayfs: idmapped layers are currently not supported
	[Oct27 20:01] overlayfs: idmapped layers are currently not supported
	[Oct27 20:02] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b939b6634b4d016b4989e6e47aa2060602b0e9582bcbe6e92ed906e4e1c2d5b5] <==
	{"level":"warn","ts":"2025-10-27T20:01:40.301779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.340613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.379137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.415449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.430536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.459333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.492681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.517181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.563565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.620332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.639505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.653997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.669508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.693957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.732973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.767406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.788467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.835862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.837560Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.861031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.880382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.914269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.939692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:40.975945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:01:41.067820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50798","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:02:41 up  2:45,  0 user,  load average: 2.59, 2.93, 2.63
	Linux embed-certs-629838 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1ec2027ec7db30a16a7dcd76b4cab74cf0f0fee27a8a8159c50979c610bb6883] <==
	I1027 20:01:43.645211       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 20:01:43.645831       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1027 20:01:43.646078       1 main.go:148] setting mtu 1500 for CNI 
	I1027 20:01:43.646120       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 20:01:43.646182       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T20:01:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 20:01:43.824551       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 20:01:43.824620       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 20:01:43.824654       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 20:01:43.825337       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 20:02:13.825541       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1027 20:02:13.825663       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1027 20:02:13.825738       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1027 20:02:13.825813       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1027 20:02:15.425152       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 20:02:15.425212       1 metrics.go:72] Registering metrics
	I1027 20:02:15.425292       1 controller.go:711] "Syncing nftables rules"
	I1027 20:02:23.824159       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 20:02:23.824209       1 main.go:301] handling current node
	I1027 20:02:33.831095       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 20:02:33.831202       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0ad8adca28e83fc64435c7179ec7d5ad6ddbf98b75047dd679977005035865ac] <==
	I1027 20:01:42.545981       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 20:01:42.590388       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1027 20:01:42.590529       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 20:01:42.591708       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1027 20:01:42.597902       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1027 20:01:42.597973       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 20:01:42.601140       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1027 20:01:42.601400       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1027 20:01:42.601417       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1027 20:01:42.602030       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 20:01:42.602064       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1027 20:01:42.614579       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1027 20:01:42.616688       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 20:01:42.636413       1 cache.go:39] Caches are synced for autoregister controller
	I1027 20:01:43.026630       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 20:01:43.105832       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 20:01:43.382051       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 20:01:43.538424       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 20:01:43.628477       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 20:01:43.700433       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 20:01:44.056431       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.239.19"}
	I1027 20:01:44.122398       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.145.67"}
	I1027 20:01:46.301883       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 20:01:46.352954       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 20:01:46.519191       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [d4ab5323a8b08831eb177e903d21bbe79ed018f63e07293ef7e383fc941ad31f] <==
	I1027 20:01:45.909597       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1027 20:01:45.909658       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 20:01:45.912906       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1027 20:01:45.912966       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1027 20:01:45.914098       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 20:01:45.914114       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1027 20:01:45.914124       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1027 20:01:45.915219       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 20:01:45.917465       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 20:01:45.918702       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 20:01:45.921913       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 20:01:45.923552       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1027 20:01:45.925019       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 20:01:45.928270       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 20:01:45.930479       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 20:01:45.932788       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 20:01:45.940700       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 20:01:45.945846       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 20:01:45.946468       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1027 20:01:45.946663       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 20:01:45.946834       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 20:01:45.948992       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 20:01:45.949318       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 20:01:45.953902       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 20:01:45.964708       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	
	
	==> kube-proxy [fff55bbbe9a89678e38123d75f96796776f16fde5fd525adf749545f93e256a0] <==
	I1027 20:01:44.065723       1 server_linux.go:53] "Using iptables proxy"
	I1027 20:01:44.285895       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 20:01:44.392470       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 20:01:44.394111       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1027 20:01:44.394262       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 20:01:44.425543       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 20:01:44.425603       1 server_linux.go:132] "Using iptables Proxier"
	I1027 20:01:44.432720       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 20:01:44.433212       1 server.go:527] "Version info" version="v1.34.1"
	I1027 20:01:44.433447       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 20:01:44.434682       1 config.go:200] "Starting service config controller"
	I1027 20:01:44.434906       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 20:01:44.434976       1 config.go:106] "Starting endpoint slice config controller"
	I1027 20:01:44.435093       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 20:01:44.435133       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 20:01:44.435161       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 20:01:44.435822       1 config.go:309] "Starting node config controller"
	I1027 20:01:44.435891       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 20:01:44.435923       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 20:01:44.535756       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 20:01:44.535861       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 20:01:44.535886       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [62e147a7d6f68f8e6cb774b163bbf47eeada86a4f24470d5ec3b80cc23844557] <==
	I1027 20:01:40.602866       1 serving.go:386] Generated self-signed cert in-memory
	I1027 20:01:44.092976       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 20:01:44.093002       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 20:01:44.105970       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1027 20:01:44.106159       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1027 20:01:44.106243       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:01:44.106275       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:01:44.106341       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 20:01:44.106370       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 20:01:44.107932       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 20:01:44.108010       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 20:01:44.206862       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 20:01:44.207025       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1027 20:01:44.207174       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 20:01:46 embed-certs-629838 kubelet[774]: I1027 20:01:46.536422     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f64c87eb-43dd-4b88-b7e7-32467fb2e83d-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-ddw8x\" (UID: \"f64c87eb-43dd-4b88-b7e7-32467fb2e83d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ddw8x"
	Oct 27 20:01:46 embed-certs-629838 kubelet[774]: W1027 20:01:46.830665     774 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c4f57eb9d97cb78db15678f52105eae7c06964f89c91a987456d1c3d7cf90fa0/crio-101807dc653a62bf806f9e8df5e5c4ce035d4ffa3f3b030a4bee4cd940cd1e3b WatchSource:0}: Error finding container 101807dc653a62bf806f9e8df5e5c4ce035d4ffa3f3b030a4bee4cd940cd1e3b: Status 404 returned error can't find the container with id 101807dc653a62bf806f9e8df5e5c4ce035d4ffa3f3b030a4bee4cd940cd1e3b
	Oct 27 20:01:46 embed-certs-629838 kubelet[774]: W1027 20:01:46.847229     774 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/c4f57eb9d97cb78db15678f52105eae7c06964f89c91a987456d1c3d7cf90fa0/crio-3d6d5b98e197469779a897c8b16efc5624c2c2f39e8b6d20bce4ae5c95bfae38 WatchSource:0}: Error finding container 3d6d5b98e197469779a897c8b16efc5624c2c2f39e8b6d20bce4ae5c95bfae38: Status 404 returned error can't find the container with id 3d6d5b98e197469779a897c8b16efc5624c2c2f39e8b6d20bce4ae5c95bfae38
	Oct 27 20:01:51 embed-certs-629838 kubelet[774]: I1027 20:01:51.535351     774 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 27 20:01:53 embed-certs-629838 kubelet[774]: I1027 20:01:53.126673     774 scope.go:117] "RemoveContainer" containerID="51093c6b50d7ba6eaef141798fa97b57e350b938feeb3059bc670a50e0635c33"
	Oct 27 20:01:54 embed-certs-629838 kubelet[774]: I1027 20:01:54.175907     774 scope.go:117] "RemoveContainer" containerID="8c895b40bc07fab1aa6283121a31d2bbc6b09d35a5b8c241b7d9592dba32c1fe"
	Oct 27 20:01:54 embed-certs-629838 kubelet[774]: E1027 20:01:54.176105     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ddw8x_kubernetes-dashboard(f64c87eb-43dd-4b88-b7e7-32467fb2e83d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ddw8x" podUID="f64c87eb-43dd-4b88-b7e7-32467fb2e83d"
	Oct 27 20:01:54 embed-certs-629838 kubelet[774]: I1027 20:01:54.179240     774 scope.go:117] "RemoveContainer" containerID="51093c6b50d7ba6eaef141798fa97b57e350b938feeb3059bc670a50e0635c33"
	Oct 27 20:01:55 embed-certs-629838 kubelet[774]: I1027 20:01:55.182893     774 scope.go:117] "RemoveContainer" containerID="8c895b40bc07fab1aa6283121a31d2bbc6b09d35a5b8c241b7d9592dba32c1fe"
	Oct 27 20:01:55 embed-certs-629838 kubelet[774]: E1027 20:01:55.188004     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ddw8x_kubernetes-dashboard(f64c87eb-43dd-4b88-b7e7-32467fb2e83d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ddw8x" podUID="f64c87eb-43dd-4b88-b7e7-32467fb2e83d"
	Oct 27 20:01:59 embed-certs-629838 kubelet[774]: I1027 20:01:59.856497     774 scope.go:117] "RemoveContainer" containerID="8c895b40bc07fab1aa6283121a31d2bbc6b09d35a5b8c241b7d9592dba32c1fe"
	Oct 27 20:01:59 embed-certs-629838 kubelet[774]: E1027 20:01:59.856679     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ddw8x_kubernetes-dashboard(f64c87eb-43dd-4b88-b7e7-32467fb2e83d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ddw8x" podUID="f64c87eb-43dd-4b88-b7e7-32467fb2e83d"
	Oct 27 20:02:14 embed-certs-629838 kubelet[774]: I1027 20:02:14.243680     774 scope.go:117] "RemoveContainer" containerID="82adb2c58510baa0e605b645b22fc5bef69cd39b85ce04adde1bd01d6a2127b3"
	Oct 27 20:02:14 embed-certs-629838 kubelet[774]: I1027 20:02:14.275721     774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zplzg" podStartSLOduration=16.043386189 podStartE2EDuration="28.275693921s" podCreationTimestamp="2025-10-27 20:01:46 +0000 UTC" firstStartedPulling="2025-10-27 20:01:46.851470165 +0000 UTC m=+10.126249646" lastFinishedPulling="2025-10-27 20:01:59.083777896 +0000 UTC m=+22.358557378" observedRunningTime="2025-10-27 20:01:59.224116346 +0000 UTC m=+22.498895877" watchObservedRunningTime="2025-10-27 20:02:14.275693921 +0000 UTC m=+37.550473411"
	Oct 27 20:02:15 embed-certs-629838 kubelet[774]: I1027 20:02:15.002538     774 scope.go:117] "RemoveContainer" containerID="8c895b40bc07fab1aa6283121a31d2bbc6b09d35a5b8c241b7d9592dba32c1fe"
	Oct 27 20:02:15 embed-certs-629838 kubelet[774]: I1027 20:02:15.247988     774 scope.go:117] "RemoveContainer" containerID="8c895b40bc07fab1aa6283121a31d2bbc6b09d35a5b8c241b7d9592dba32c1fe"
	Oct 27 20:02:15 embed-certs-629838 kubelet[774]: I1027 20:02:15.248270     774 scope.go:117] "RemoveContainer" containerID="05a28f65e7306f32e6fe44d10f1eb7513a2f00be81d35bf861d871965164280c"
	Oct 27 20:02:15 embed-certs-629838 kubelet[774]: E1027 20:02:15.248508     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ddw8x_kubernetes-dashboard(f64c87eb-43dd-4b88-b7e7-32467fb2e83d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ddw8x" podUID="f64c87eb-43dd-4b88-b7e7-32467fb2e83d"
	Oct 27 20:02:19 embed-certs-629838 kubelet[774]: I1027 20:02:19.856535     774 scope.go:117] "RemoveContainer" containerID="05a28f65e7306f32e6fe44d10f1eb7513a2f00be81d35bf861d871965164280c"
	Oct 27 20:02:19 embed-certs-629838 kubelet[774]: E1027 20:02:19.857194     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ddw8x_kubernetes-dashboard(f64c87eb-43dd-4b88-b7e7-32467fb2e83d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ddw8x" podUID="f64c87eb-43dd-4b88-b7e7-32467fb2e83d"
	Oct 27 20:02:35 embed-certs-629838 kubelet[774]: I1027 20:02:35.003210     774 scope.go:117] "RemoveContainer" containerID="05a28f65e7306f32e6fe44d10f1eb7513a2f00be81d35bf861d871965164280c"
	Oct 27 20:02:35 embed-certs-629838 kubelet[774]: E1027 20:02:35.004288     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ddw8x_kubernetes-dashboard(f64c87eb-43dd-4b88-b7e7-32467fb2e83d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ddw8x" podUID="f64c87eb-43dd-4b88-b7e7-32467fb2e83d"
	Oct 27 20:02:35 embed-certs-629838 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 20:02:35 embed-certs-629838 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 20:02:35 embed-certs-629838 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [098bd45d59457750f47cfad92e238708c4767cf897a1ae3b97f78333c2fee810] <==
	2025/10/27 20:01:59 Using namespace: kubernetes-dashboard
	2025/10/27 20:01:59 Using in-cluster config to connect to apiserver
	2025/10/27 20:01:59 Using secret token for csrf signing
	2025/10/27 20:01:59 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 20:01:59 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 20:01:59 Successful initial request to the apiserver, version: v1.34.1
	2025/10/27 20:01:59 Generating JWE encryption key
	2025/10/27 20:01:59 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 20:01:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 20:01:59 Initializing JWE encryption key from synchronized object
	2025/10/27 20:01:59 Creating in-cluster Sidecar client
	2025/10/27 20:01:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 20:01:59 Serving insecurely on HTTP port: 9090
	2025/10/27 20:02:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 20:01:59 Starting overwatch
	
	
	==> storage-provisioner [82adb2c58510baa0e605b645b22fc5bef69cd39b85ce04adde1bd01d6a2127b3] <==
	I1027 20:01:43.944628       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 20:02:13.946432       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e2942edb63853d027badc94c1e3181d04390c3297cb5ee2e519fc254c1c790eb] <==
	I1027 20:02:14.338950       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 20:02:14.339081       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 20:02:14.341796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:02:17.797186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:02:22.057327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:02:25.655911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:02:28.709884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:02:31.732100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:02:31.738960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 20:02:31.739242       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 20:02:31.739426       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-629838_b895aab8-e8a0-4c22-a642-6e2cb0a2690c!
	I1027 20:02:31.739482       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2ebd0f97-21e2-431a-a333-48d0485c417f", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-629838_b895aab8-e8a0-4c22-a642-6e2cb0a2690c became leader
	W1027 20:02:31.745986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:02:31.749566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 20:02:31.839819       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-629838_b895aab8-e8a0-4c22-a642-6e2cb0a2690c!
	W1027 20:02:33.758967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:02:33.771831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:02:35.775213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:02:35.781445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:02:37.791456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:02:37.798734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:02:39.802154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:02:39.807589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:02:41.810632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:02:41.815155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
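The repeated warnings.go:70 lines in the storage-provisioner output above are not themselves a fault in this test: the provisioner's leader election still takes its lock on the core/v1 Endpoints object kube-system/k8s.io-minikube-hostpath, and every acquire/renew request causes the API server to attach the "v1 Endpoints is deprecated in v1.33+" warning, which client-go then prints per request. A minimal way to see both kinds of object side by side, assuming the embed-certs-629838 profile were still running (hypothetical commands, not part of the test's tooling):

	# The deprecated lock object the provisioner polls (the source of each warning):
	kubectl --context embed-certs-629838 -n kube-system get endpoints k8s.io-minikube-hostpath
	# Newer leader-election code typically holds a coordination.k8s.io Lease instead:
	kubectl --context embed-certs-629838 -n kube-system get leases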
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-629838 -n embed-certs-629838
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-629838 -n embed-certs-629838: exit status 2 (440.561414ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-629838 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-073048 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-073048 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (335.782098ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T20:03:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
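The MK_ADDON_ENABLE_PAUSED error above comes from minikube's pre-flight pause check: before enabling an addon it asks the low-level runtime for paused containers via `sudo runc list -f json`, and on this crio node that probe fails outright because the runc state directory /run/runc does not exist, so the addon itself is never reached. A rough reproduction of the probe by hand, assuming the default-k8s-diff-port-073048 node container is still up (a sketch, not the test's own tooling):

	# The same probe minikube runs inside the node; expect the /run/runc error shown above.
	docker exec default-k8s-diff-port-073048 sudo runc list -f json
	# crio tracks its own containers, so crictl can still enumerate them:
	docker exec default-k8s-diff-port-073048 sudo crictl ps --state running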
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-073048 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-073048 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-073048 describe deploy/metrics-server -n kube-system: exit status 1 (107.481615ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-073048 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
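Nothing follows "Addon deployment info:" because the describe call above already returned NotFound: the enable command exited at the pause check, so the metrics-server deployment was never created. A narrower probe for just the image string this assertion wants, assuming the same context (hypothetical; here it would return the same NotFound):

	kubectl --context default-k8s-diff-port-073048 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'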
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-073048
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-073048:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0d0a6d2c139cfccb44f717c1d4bf1de32b26fb3f98fc7320c9146a486f5ddfbb",
	        "Created": "2025-10-27T20:02:05.981897269Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 466922,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T20:02:06.056144796Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/0d0a6d2c139cfccb44f717c1d4bf1de32b26fb3f98fc7320c9146a486f5ddfbb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d0a6d2c139cfccb44f717c1d4bf1de32b26fb3f98fc7320c9146a486f5ddfbb/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d0a6d2c139cfccb44f717c1d4bf1de32b26fb3f98fc7320c9146a486f5ddfbb/hosts",
	        "LogPath": "/var/lib/docker/containers/0d0a6d2c139cfccb44f717c1d4bf1de32b26fb3f98fc7320c9146a486f5ddfbb/0d0a6d2c139cfccb44f717c1d4bf1de32b26fb3f98fc7320c9146a486f5ddfbb-json.log",
	        "Name": "/default-k8s-diff-port-073048",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-073048:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-073048",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0d0a6d2c139cfccb44f717c1d4bf1de32b26fb3f98fc7320c9146a486f5ddfbb",
	                "LowerDir": "/var/lib/docker/overlay2/7b35ec25c436fa4e076bd86f8611b944f0fb1a3f41e654812ca0a32f05e4bb57-init/diff:/var/lib/docker/overlay2/0567218bbe05e8b59b2aef7ad82032d6716040c1f9d9d73e7de8682101079f76/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7b35ec25c436fa4e076bd86f8611b944f0fb1a3f41e654812ca0a32f05e4bb57/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7b35ec25c436fa4e076bd86f8611b944f0fb1a3f41e654812ca0a32f05e4bb57/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7b35ec25c436fa4e076bd86f8611b944f0fb1a3f41e654812ca0a32f05e4bb57/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-073048",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-073048/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-073048",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-073048",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-073048",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d13a6f75e735fc54178ae3c4ae6a582df0ada4e23474d94bda01e29f1fa28c19",
	            "SandboxKey": "/var/run/docker/netns/d13a6f75e735",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-073048": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:b5:d1:27:d7:04",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "693360b70a0a6dc4cb15a9fc19e2d3b83d1fde9de38ebc7c4ce28555e19407c1",
	                    "EndpointID": "d4ec5be9ad7573dc540cd928d0b8380baa2016a074474224f0b1d380c1e764e4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-073048",
	                        "0d0a6d2c139c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
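For the question this post-mortem is actually asking (is the node container running, and is it paused?), only the State block near the top of the inspect output matters. A format template pulls just those fields instead of the full document, assuming the container still exists (illustrative only):

	# Prints e.g. "running paused=false", matching the State block above.
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' default-k8s-diff-port-073048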
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-073048 -n default-k8s-diff-port-073048
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-073048 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-073048 logs -n 25: (1.56165421s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p old-k8s-version-942644 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-942644       │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │                     │
	│ delete  │ -p old-k8s-version-942644                                                                                                                                                                                                                     │ old-k8s-version-942644       │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 19:59 UTC │
	│ delete  │ -p old-k8s-version-942644                                                                                                                                                                                                                     │ old-k8s-version-942644       │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 19:59 UTC │
	│ start   │ -p embed-certs-629838 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 20:01 UTC │
	│ addons  │ enable metrics-server -p no-preload-300878 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:00 UTC │                     │
	│ stop    │ -p no-preload-300878 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:00 UTC │ 27 Oct 25 20:00 UTC │
	│ addons  │ enable dashboard -p no-preload-300878 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:00 UTC │ 27 Oct 25 20:00 UTC │
	│ start   │ -p no-preload-300878 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:00 UTC │ 27 Oct 25 20:01 UTC │
	│ addons  │ enable metrics-server -p embed-certs-629838 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │                     │
	│ stop    │ -p embed-certs-629838 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ addons  │ enable dashboard -p embed-certs-629838 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ start   │ -p embed-certs-629838 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:02 UTC │
	│ image   │ no-preload-300878 image list --format=json                                                                                                                                                                                                    │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ pause   │ -p no-preload-300878 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │                     │
	│ delete  │ -p no-preload-300878                                                                                                                                                                                                                          │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ delete  │ -p no-preload-300878                                                                                                                                                                                                                          │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ delete  │ -p disable-driver-mounts-230052                                                                                                                                                                                                               │ disable-driver-mounts-230052 │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:02 UTC │
	│ start   │ -p default-k8s-diff-port-073048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-073048 │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:03 UTC │
	│ image   │ embed-certs-629838 image list --format=json                                                                                                                                                                                                   │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:02 UTC │
	│ pause   │ -p embed-certs-629838 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │                     │
	│ delete  │ -p embed-certs-629838                                                                                                                                                                                                                         │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:02 UTC │
	│ delete  │ -p embed-certs-629838                                                                                                                                                                                                                         │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:02 UTC │
	│ start   │ -p newest-cni-702588 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:03 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-073048 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-073048 │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-702588 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 20:02:45
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 20:02:45.785716  470518 out.go:360] Setting OutFile to fd 1 ...
	I1027 20:02:45.785895  470518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:02:45.785905  470518 out.go:374] Setting ErrFile to fd 2...
	I1027 20:02:45.785910  470518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:02:45.786184  470518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 20:02:45.786610  470518 out.go:368] Setting JSON to false
	I1027 20:02:45.787737  470518 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9918,"bootTime":1761585448,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1027 20:02:45.787809  470518 start.go:141] virtualization:  
	I1027 20:02:45.791930  470518 out.go:179] * [newest-cni-702588] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 20:02:45.796379  470518 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 20:02:45.796489  470518 notify.go:220] Checking for updates...
	I1027 20:02:45.803150  470518 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 20:02:45.806282  470518 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 20:02:45.809281  470518 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube
	I1027 20:02:45.812543  470518 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 20:02:45.815585  470518 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 20:02:45.819122  470518 config.go:182] Loaded profile config "default-k8s-diff-port-073048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:02:45.819266  470518 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 20:02:45.846690  470518 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 20:02:45.846892  470518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 20:02:45.912213  470518 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 20:02:45.903534386 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 20:02:45.912318  470518 docker.go:318] overlay module found
	I1027 20:02:45.915578  470518 out.go:179] * Using the docker driver based on user configuration
	I1027 20:02:45.918393  470518 start.go:305] selected driver: docker
	I1027 20:02:45.918412  470518 start.go:925] validating driver "docker" against <nil>
	I1027 20:02:45.918427  470518 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 20:02:45.919263  470518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 20:02:45.975538  470518 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 20:02:45.965754848 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 20:02:45.975709  470518 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1027 20:02:45.975741  470518 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1027 20:02:45.975965  470518 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1027 20:02:45.978728  470518 out.go:179] * Using Docker driver with root privileges
	I1027 20:02:45.981758  470518 cni.go:84] Creating CNI manager for ""
	I1027 20:02:45.981845  470518 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 20:02:45.981862  470518 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 20:02:45.981943  470518 start.go:349] cluster config:
	{Name:newest-cni-702588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-702588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:02:45.986970  470518 out.go:179] * Starting "newest-cni-702588" primary control-plane node in "newest-cni-702588" cluster
	I1027 20:02:45.989827  470518 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 20:02:45.992790  470518 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 20:02:45.995596  470518 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:02:45.995668  470518 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 20:02:45.995686  470518 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 20:02:45.995696  470518 cache.go:58] Caching tarball of preloaded images
	I1027 20:02:45.995795  470518 preload.go:233] Found /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 20:02:45.995804  470518 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 20:02:45.995913  470518 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/config.json ...
	I1027 20:02:45.995946  470518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/config.json: {Name:mk88123d02d0184d7c1eca8717c120dcfee3cace Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
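	The profile config saved here is just the cluster config dumped above, serialized as JSON, so it can be queried after the fact to confirm what the start flags resolved to. A minimal sketch, assuming jq is available on the host (the key names follow the Go struct fields shown in the dump; not output captured from this run):
	
	    # Path taken from the "Saving config to ..." log line above.
	    CFG=/home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/config.json
	    jq '{runtime: .KubernetesConfig.ContainerRuntime,
	         plugin:  .KubernetesConfig.NetworkPlugin,
	         extra:   .KubernetesConfig.ExtraOptions}' "$CFG"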
	I1027 20:02:46.017272  470518 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 20:02:46.017300  470518 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 20:02:46.017320  470518 cache.go:232] Successfully downloaded all kic artifacts
	I1027 20:02:46.017343  470518 start.go:360] acquireMachinesLock for newest-cni-702588: {Name:mkcad9a0641a8c73353a267f147f59ff63030507 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 20:02:46.017466  470518 start.go:364] duration metric: took 91.591µs to acquireMachinesLock for "newest-cni-702588"
	I1027 20:02:46.017500  470518 start.go:93] Provisioning new machine with config: &{Name:newest-cni-702588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-702588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 20:02:46.017596  470518 start.go:125] createHost starting for "" (driver="docker")
	W1027 20:02:47.567236  466537 node_ready.go:57] node "default-k8s-diff-port-073048" has "Ready":"False" status (will retry)
	W1027 20:02:49.568105  466537 node_ready.go:57] node "default-k8s-diff-port-073048" has "Ready":"False" status (will retry)
	I1027 20:02:46.021834  470518 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 20:02:46.022092  470518 start.go:159] libmachine.API.Create for "newest-cni-702588" (driver="docker")
	I1027 20:02:46.022148  470518 client.go:168] LocalClient.Create starting
	I1027 20:02:46.022226  470518 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem
	I1027 20:02:46.022274  470518 main.go:141] libmachine: Decoding PEM data...
	I1027 20:02:46.022290  470518 main.go:141] libmachine: Parsing certificate...
	I1027 20:02:46.022347  470518 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem
	I1027 20:02:46.022369  470518 main.go:141] libmachine: Decoding PEM data...
	I1027 20:02:46.022380  470518 main.go:141] libmachine: Parsing certificate...
	I1027 20:02:46.022768  470518 cli_runner.go:164] Run: docker network inspect newest-cni-702588 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 20:02:46.039397  470518 cli_runner.go:211] docker network inspect newest-cni-702588 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 20:02:46.039495  470518 network_create.go:284] running [docker network inspect newest-cni-702588] to gather additional debugging logs...
	I1027 20:02:46.039520  470518 cli_runner.go:164] Run: docker network inspect newest-cni-702588
	W1027 20:02:46.055655  470518 cli_runner.go:211] docker network inspect newest-cni-702588 returned with exit code 1
	I1027 20:02:46.055690  470518 network_create.go:287] error running [docker network inspect newest-cni-702588]: docker network inspect newest-cni-702588: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-702588 not found
	I1027 20:02:46.055706  470518 network_create.go:289] output of [docker network inspect newest-cni-702588]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-702588 not found
	
	** /stderr **
	I1027 20:02:46.055871  470518 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 20:02:46.075608  470518 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-74ee89127400 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:aa:c7:29:bd:7a:a9} reservation:<nil>}
	I1027 20:02:46.076304  470518 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9c57ca829ac8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:0a:24:08:53:d6:07} reservation:<nil>}
	I1027 20:02:46.076757  470518 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b7a7c45fd176 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:52:e6:cd:c8:a6} reservation:<nil>}
	I1027 20:02:46.077258  470518 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a07ea0}
	I1027 20:02:46.077280  470518 network_create.go:124] attempt to create docker network newest-cni-702588 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1027 20:02:46.077342  470518 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-702588 newest-cni-702588
	I1027 20:02:46.141479  470518 network_create.go:108] docker network newest-cni-702588 192.168.76.0/24 created
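	The three "skipping subnet" lines above are minikube walking the private 192.168.x.0/24 ranges until it finds one that no existing bridge network claims. The same survey can be done by hand; a rough sketch using only the docker CLI (network names depend on whatever is on the host):
	
	    # Print each docker network with its subnet to see which /24s are taken.
	    for n in $(docker network ls -q); do
	      docker network inspect "$n" --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'
	    done
	
	Any range absent from that list is a candidate, which is how 192.168.76.0/24 was selected here.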
	I1027 20:02:46.141508  470518 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-702588" container
	I1027 20:02:46.141591  470518 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 20:02:46.158005  470518 cli_runner.go:164] Run: docker volume create newest-cni-702588 --label name.minikube.sigs.k8s.io=newest-cni-702588 --label created_by.minikube.sigs.k8s.io=true
	I1027 20:02:46.174775  470518 oci.go:103] Successfully created a docker volume newest-cni-702588
	I1027 20:02:46.174870  470518 cli_runner.go:164] Run: docker run --rm --name newest-cni-702588-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-702588 --entrypoint /usr/bin/test -v newest-cni-702588:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 20:02:46.737552  470518 oci.go:107] Successfully prepared a docker volume newest-cni-702588
	I1027 20:02:46.737621  470518 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:02:46.737644  470518 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 20:02:46.737729  470518 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-702588:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1027 20:02:51.568853  466537 node_ready.go:57] node "default-k8s-diff-port-073048" has "Ready":"False" status (will retry)
	W1027 20:02:54.067221  466537 node_ready.go:57] node "default-k8s-diff-port-073048" has "Ready":"False" status (will retry)
	I1027 20:02:51.191470  470518 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-702588:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.453697221s)
	I1027 20:02:51.191514  470518 kic.go:203] duration metric: took 4.453867103s to extract preloaded images to volume ...
	W1027 20:02:51.191680  470518 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1027 20:02:51.191801  470518 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 20:02:51.250675  470518 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-702588 --name newest-cni-702588 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-702588 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-702588 --network newest-cni-702588 --ip 192.168.76.2 --volume newest-cni-702588:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 20:02:51.572912  470518 cli_runner.go:164] Run: docker container inspect newest-cni-702588 --format={{.State.Running}}
	I1027 20:02:51.595181  470518 cli_runner.go:164] Run: docker container inspect newest-cni-702588 --format={{.State.Status}}
	I1027 20:02:51.627389  470518 cli_runner.go:164] Run: docker exec newest-cni-702588 stat /var/lib/dpkg/alternatives/iptables
	I1027 20:02:51.685541  470518 oci.go:144] the created container "newest-cni-702588" has a running status.
	I1027 20:02:51.685568  470518 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21801-266035/.minikube/machines/newest-cni-702588/id_rsa...
	I1027 20:02:51.906722  470518 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21801-266035/.minikube/machines/newest-cni-702588/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 20:02:51.930734  470518 cli_runner.go:164] Run: docker container inspect newest-cni-702588 --format={{.State.Status}}
	I1027 20:02:51.951290  470518 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 20:02:51.951314  470518 kic_runner.go:114] Args: [docker exec --privileged newest-cni-702588 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 20:02:52.004192  470518 cli_runner.go:164] Run: docker container inspect newest-cni-702588 --format={{.State.Status}}
	I1027 20:02:52.033952  470518 machine.go:93] provisionDockerMachine start ...
	I1027 20:02:52.034051  470518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-702588
	I1027 20:02:52.063844  470518 main.go:141] libmachine: Using SSH client type: native
	I1027 20:02:52.065561  470518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1027 20:02:52.065723  470518 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 20:02:52.066772  470518 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1027 20:02:55.222678  470518 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-702588
	
	I1027 20:02:55.222700  470518 ubuntu.go:182] provisioning hostname "newest-cni-702588"
	I1027 20:02:55.222784  470518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-702588
	I1027 20:02:55.240986  470518 main.go:141] libmachine: Using SSH client type: native
	I1027 20:02:55.241292  470518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1027 20:02:55.241303  470518 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-702588 && echo "newest-cni-702588" | sudo tee /etc/hostname
	I1027 20:02:55.405296  470518 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-702588
	
	I1027 20:02:55.405398  470518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-702588
	I1027 20:02:55.425877  470518 main.go:141] libmachine: Using SSH client type: native
	I1027 20:02:55.426323  470518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1027 20:02:55.426378  470518 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-702588' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-702588/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-702588' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 20:02:55.579329  470518 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 20:02:55.579401  470518 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-266035/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-266035/.minikube}
	I1027 20:02:55.579441  470518 ubuntu.go:190] setting up certificates
	I1027 20:02:55.579483  470518 provision.go:84] configureAuth start
	I1027 20:02:55.579568  470518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-702588
	I1027 20:02:55.595945  470518 provision.go:143] copyHostCerts
	I1027 20:02:55.596015  470518 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem, removing ...
	I1027 20:02:55.596029  470518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem
	I1027 20:02:55.596110  470518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem (1078 bytes)
	I1027 20:02:55.596214  470518 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem, removing ...
	I1027 20:02:55.596226  470518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem
	I1027 20:02:55.596256  470518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem (1123 bytes)
	I1027 20:02:55.596324  470518 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem, removing ...
	I1027 20:02:55.596335  470518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem
	I1027 20:02:55.596359  470518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem (1675 bytes)
	I1027 20:02:55.596415  470518 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem org=jenkins.newest-cni-702588 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-702588]
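	For comparison, producing a server certificate with the same SAN list by hand would look roughly like this. Minikube does the equivalent in Go and signs with its ca.pem/ca-key.pem, whereas this openssl sketch is self-signed for brevity and the output filenames are illustrative:
	
	    # SANs copied from the san=[...] list in the log line above (OpenSSL >= 1.1.1 for -addext).
	    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
	      -keyout server-key.pem -out server.pem \
	      -subj "/O=jenkins.newest-cni-702588" \
	      -addext "subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:localhost,DNS:minikube,DNS:newest-cni-702588"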
	I1027 20:02:55.972333  470518 provision.go:177] copyRemoteCerts
	I1027 20:02:55.972402  470518 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 20:02:55.972448  470518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-702588
	I1027 20:02:55.990336  470518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/newest-cni-702588/id_rsa Username:docker}
	I1027 20:02:56.103301  470518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 20:02:56.121966  470518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 20:02:56.141729  470518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 20:02:56.159694  470518 provision.go:87] duration metric: took 580.165595ms to configureAuth
	I1027 20:02:56.159745  470518 ubuntu.go:206] setting minikube options for container-runtime
	I1027 20:02:56.160029  470518 config.go:182] Loaded profile config "newest-cni-702588": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:02:56.160152  470518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-702588
	I1027 20:02:56.177264  470518 main.go:141] libmachine: Using SSH client type: native
	I1027 20:02:56.177664  470518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1027 20:02:56.177683  470518 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 20:02:56.436658  470518 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 20:02:56.436722  470518 machine.go:96] duration metric: took 4.402751639s to provisionDockerMachine
	I1027 20:02:56.436749  470518 client.go:171] duration metric: took 10.414590673s to LocalClient.Create
	I1027 20:02:56.436776  470518 start.go:167] duration metric: took 10.414685899s to libmachine.API.Create "newest-cni-702588"
	I1027 20:02:56.436809  470518 start.go:293] postStartSetup for "newest-cni-702588" (driver="docker")
	I1027 20:02:56.436838  470518 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 20:02:56.436934  470518 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 20:02:56.436995  470518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-702588
	I1027 20:02:56.463424  470518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/newest-cni-702588/id_rsa Username:docker}
	I1027 20:02:56.571490  470518 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 20:02:56.574600  470518 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 20:02:56.574641  470518 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 20:02:56.574654  470518 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/addons for local assets ...
	I1027 20:02:56.574726  470518 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/files for local assets ...
	I1027 20:02:56.574819  470518 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem -> 2678802.pem in /etc/ssl/certs
	I1027 20:02:56.574926  470518 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 20:02:56.582246  470518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 20:02:56.599835  470518 start.go:296] duration metric: took 162.992741ms for postStartSetup
	I1027 20:02:56.600217  470518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-702588
	I1027 20:02:56.619053  470518 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/config.json ...
	I1027 20:02:56.619357  470518 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 20:02:56.619405  470518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-702588
	I1027 20:02:56.636120  470518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/newest-cni-702588/id_rsa Username:docker}
	I1027 20:02:56.735887  470518 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 20:02:56.740445  470518 start.go:128] duration metric: took 10.722833636s to createHost
	I1027 20:02:56.740470  470518 start.go:83] releasing machines lock for "newest-cni-702588", held for 10.722988479s
	I1027 20:02:56.740539  470518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-702588
	I1027 20:02:56.756441  470518 ssh_runner.go:195] Run: cat /version.json
	I1027 20:02:56.756541  470518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-702588
	I1027 20:02:56.756546  470518 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 20:02:56.756607  470518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-702588
	I1027 20:02:56.775205  470518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/newest-cni-702588/id_rsa Username:docker}
	I1027 20:02:56.783132  470518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/newest-cni-702588/id_rsa Username:docker}
	I1027 20:02:56.878712  470518 ssh_runner.go:195] Run: systemctl --version
	I1027 20:02:56.967735  470518 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 20:02:57.007063  470518 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 20:02:57.012709  470518 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 20:02:57.012786  470518 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 20:02:57.043379  470518 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1027 20:02:57.043451  470518 start.go:495] detecting cgroup driver to use...
	I1027 20:02:57.043499  470518 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 20:02:57.043578  470518 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 20:02:57.061856  470518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 20:02:57.076640  470518 docker.go:218] disabling cri-docker service (if available) ...
	I1027 20:02:57.076722  470518 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 20:02:57.094067  470518 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 20:02:57.113729  470518 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 20:02:57.240224  470518 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 20:02:57.365029  470518 docker.go:234] disabling docker service ...
	I1027 20:02:57.365141  470518 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 20:02:57.387737  470518 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 20:02:57.401018  470518 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 20:02:57.524719  470518 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 20:02:57.641021  470518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 20:02:57.654389  470518 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 20:02:57.677868  470518 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 20:02:57.677977  470518 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:02:57.687502  470518 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 20:02:57.687573  470518 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:02:57.696839  470518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:02:57.706341  470518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:02:57.715921  470518 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 20:02:57.724702  470518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:02:57.733674  470518 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:02:57.746868  470518 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
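	Taken together, those sed edits are meant to leave the CRI-O drop-in in roughly this state (a reconstruction from the commands above, not a capture of the actual file; the TOML section headers are assumed from CRI-O's standard config layout):
	
	    cat /etc/crio/crio.conf.d/02-crio.conf
	    # [crio.image]
	    # pause_image = "registry.k8s.io/pause:3.10.1"
	    # [crio.runtime]
	    # cgroup_manager = "cgroupfs"
	    # conmon_cgroup = "pod"
	    # default_sysctls = [
	    #   "net.ipv4.ip_unprivileged_port_start=0",
	    # ]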
	I1027 20:02:57.755827  470518 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 20:02:57.764173  470518 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 20:02:57.771411  470518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:02:57.884115  470518 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 20:02:58.008496  470518 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 20:02:58.008629  470518 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 20:02:58.013198  470518 start.go:563] Will wait 60s for crictl version
	I1027 20:02:58.013265  470518 ssh_runner.go:195] Run: which crictl
	I1027 20:02:58.017238  470518 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 20:02:58.042783  470518 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 20:02:58.042873  470518 ssh_runner.go:195] Run: crio --version
	I1027 20:02:58.075048  470518 ssh_runner.go:195] Run: crio --version
	I1027 20:02:58.109676  470518 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 20:02:58.112614  470518 cli_runner.go:164] Run: docker network inspect newest-cni-702588 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 20:02:58.129013  470518 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1027 20:02:58.132766  470518 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 20:02:58.145318  470518 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1027 20:02:56.067694  466537 node_ready.go:57] node "default-k8s-diff-port-073048" has "Ready":"False" status (will retry)
	W1027 20:02:58.068165  466537 node_ready.go:57] node "default-k8s-diff-port-073048" has "Ready":"False" status (will retry)
	W1027 20:03:00.068986  466537 node_ready.go:57] node "default-k8s-diff-port-073048" has "Ready":"False" status (will retry)
	I1027 20:02:58.148122  470518 kubeadm.go:883] updating cluster {Name:newest-cni-702588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-702588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 20:02:58.148271  470518 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:02:58.148361  470518 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 20:02:58.181657  470518 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 20:02:58.181678  470518 crio.go:433] Images already preloaded, skipping extraction
	I1027 20:02:58.181732  470518 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 20:02:58.214800  470518 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 20:02:58.214822  470518 cache_images.go:85] Images are preloaded, skipping loading
	I1027 20:02:58.214830  470518 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1027 20:02:58.214950  470518 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-702588 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-702588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
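	That ExecStart override is what gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the 367-byte scp a few lines below. Once it is in place, the effective unit (base file plus drop-in) can be confirmed from inside the node; a minimal sketch:
	
	    # Show the kubelet unit together with any drop-in overrides.
	    systemctl cat kubelet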
	I1027 20:02:58.215079  470518 ssh_runner.go:195] Run: crio config
	I1027 20:02:58.286344  470518 cni.go:84] Creating CNI manager for ""
	I1027 20:02:58.286416  470518 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 20:02:58.286459  470518 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1027 20:02:58.286499  470518 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-702588 NodeName:newest-cni-702588 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 20:02:58.286691  470518 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-702588"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
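	A rendered config like this can be sanity-checked before kubeadm ever consumes it. A sketch, assuming the file has already been copied to the path used by the scp a few lines below (kubeadm >= 1.26 ships the validate subcommand):
	
	    # Dry-check the generated config without touching the cluster.
	    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new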
	
	I1027 20:02:58.286796  470518 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 20:02:58.294657  470518 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 20:02:58.294765  470518 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 20:02:58.302317  470518 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1027 20:02:58.315914  470518 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 20:02:58.328925  470518 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1027 20:02:58.342200  470518 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1027 20:02:58.345655  470518 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 20:02:58.355109  470518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:02:58.472732  470518 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 20:02:58.490422  470518 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588 for IP: 192.168.76.2
	I1027 20:02:58.490495  470518 certs.go:195] generating shared ca certs ...
	I1027 20:02:58.490531  470518 certs.go:227] acquiring lock for ca certs: {Name:mk172548fc54811b51040cc0201d01eb4b3dd19b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:58.490702  470518 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key
	I1027 20:02:58.490783  470518 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key
	I1027 20:02:58.490818  470518 certs.go:257] generating profile certs ...
	I1027 20:02:58.490900  470518 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/client.key
	I1027 20:02:58.490947  470518 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/client.crt with IP's: []
	I1027 20:02:58.978122  470518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/client.crt ...
	I1027 20:02:58.978153  470518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/client.crt: {Name:mk2a4be85dd65c523fd79ea6e7981ba2d675e3ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:58.978344  470518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/client.key ...
	I1027 20:02:58.978358  470518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/client.key: {Name:mk1c73e2c4102b4119289b5f83fae52729a6438c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:58.978449  470518 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/apiserver.key.585f5f02
	I1027 20:02:58.978466  470518 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/apiserver.crt.585f5f02 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1027 20:02:59.025760  470518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/apiserver.crt.585f5f02 ...
	I1027 20:02:59.025789  470518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/apiserver.crt.585f5f02: {Name:mka306d141988b7c3e248da0a02dd7daef042114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:59.025964  470518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/apiserver.key.585f5f02 ...
	I1027 20:02:59.025978  470518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/apiserver.key.585f5f02: {Name:mk4c9caea83796ee96b375f57be4ebb1dc60fc1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:59.026066  470518 certs.go:382] copying /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/apiserver.crt.585f5f02 -> /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/apiserver.crt
	I1027 20:02:59.026145  470518 certs.go:386] copying /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/apiserver.key.585f5f02 -> /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/apiserver.key
	I1027 20:02:59.026212  470518 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/proxy-client.key
	I1027 20:02:59.026231  470518 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/proxy-client.crt with IP's: []
	I1027 20:02:59.674125  470518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/proxy-client.crt ...
	I1027 20:02:59.674155  470518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/proxy-client.crt: {Name:mk9281e4c5affe91765a1ef4958a505bd69a3b34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:59.674347  470518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/proxy-client.key ...
	I1027 20:02:59.674363  470518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/proxy-client.key: {Name:mk377369caab2efb73b63f57b628b5a90c74fe25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:59.674551  470518 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880.pem (1338 bytes)
	W1027 20:02:59.674594  470518 certs.go:480] ignoring /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880_empty.pem, impossibly tiny 0 bytes
	I1027 20:02:59.674608  470518 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 20:02:59.674633  470518 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem (1078 bytes)
	I1027 20:02:59.674660  470518 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem (1123 bytes)
	I1027 20:02:59.674684  470518 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem (1675 bytes)
	I1027 20:02:59.674730  470518 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 20:02:59.675360  470518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 20:02:59.695470  470518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 20:02:59.714902  470518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 20:02:59.734632  470518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 20:02:59.753167  470518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1027 20:02:59.772811  470518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 20:02:59.790847  470518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 20:02:59.810227  470518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 20:02:59.827972  470518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880.pem --> /usr/share/ca-certificates/267880.pem (1338 bytes)
	I1027 20:02:59.846040  470518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /usr/share/ca-certificates/2678802.pem (1708 bytes)
	I1027 20:02:59.863574  470518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
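
	The certs copied above can be inspected in place; a quick sketch (the -ext flag assumes OpenSSL 1.1.1 or newer on the node):

	    # show subject, validity window, and SANs of the freshly copied apiserver cert
	    openssl x509 -in /var/lib/minikube/certs/apiserver.crt \
	      -noout -subject -dates -ext subjectAltName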
	I1027 20:02:59.882829  470518 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 20:02:59.895829  470518 ssh_runner.go:195] Run: openssl version
	I1027 20:02:59.902038  470518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/267880.pem && ln -fs /usr/share/ca-certificates/267880.pem /etc/ssl/certs/267880.pem"
	I1027 20:02:59.910215  470518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/267880.pem
	I1027 20:02:59.915495  470518 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:04 /usr/share/ca-certificates/267880.pem
	I1027 20:02:59.915613  470518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/267880.pem
	I1027 20:02:59.957262  470518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/267880.pem /etc/ssl/certs/51391683.0"
	I1027 20:02:59.965553  470518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2678802.pem && ln -fs /usr/share/ca-certificates/2678802.pem /etc/ssl/certs/2678802.pem"
	I1027 20:02:59.973736  470518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2678802.pem
	I1027 20:02:59.977126  470518 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:04 /usr/share/ca-certificates/2678802.pem
	I1027 20:02:59.977237  470518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2678802.pem
	I1027 20:03:00.019985  470518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2678802.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 20:03:00.031660  470518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 20:03:00.044149  470518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:03:00.049549  470518 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:03:00.049719  470518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:03:00.147420  470518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
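
	The test -L / ln -fs pairs above maintain OpenSSL's hashed-symlink lookup scheme: each CA file gets a symlink named after its subject hash. A sketch reproducing the last one by hand:

	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # here: b5213941.0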
	I1027 20:03:00.174266  470518 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 20:03:00.179958  470518 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 20:03:00.180085  470518 kubeadm.go:400] StartCluster: {Name:newest-cni-702588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-702588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:03:00.180256  470518 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 20:03:00.180364  470518 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 20:03:00.267055  470518 cri.go:89] found id: ""
	I1027 20:03:00.267153  470518 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 20:03:00.312184  470518 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 20:03:00.360263  470518 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1027 20:03:00.360350  470518 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 20:03:00.397656  470518 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 20:03:00.397674  470518 kubeadm.go:157] found existing configuration files:
	
	I1027 20:03:00.397745  470518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 20:03:00.427842  470518 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 20:03:00.427954  470518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 20:03:00.440940  470518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 20:03:00.456111  470518 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 20:03:00.456241  470518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 20:03:00.467906  470518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 20:03:00.479594  470518 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 20:03:00.479750  470518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 20:03:00.491415  470518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 20:03:00.502295  470518 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 20:03:00.502401  470518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 20:03:00.512777  470518 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
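
	The init command above front-loads a long --ignore-preflight-errors list; to reproduce only the preflight stage with the same config, a sketch using kubeadm's phase subcommands:

	    sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
	      kubeadm init phase preflight \
	      --config /var/tmp/minikube/kubeadm.yaml \
	      --ignore-preflight-errors=SystemVerification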
	I1027 20:03:00.566927  470518 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1027 20:03:00.567306  470518 kubeadm.go:318] [preflight] Running pre-flight checks
	I1027 20:03:00.603122  470518 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1027 20:03:00.603251  470518 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1027 20:03:00.603323  470518 kubeadm.go:318] OS: Linux
	I1027 20:03:00.603403  470518 kubeadm.go:318] CGROUPS_CPU: enabled
	I1027 20:03:00.603489  470518 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1027 20:03:00.603573  470518 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1027 20:03:00.603654  470518 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1027 20:03:00.603714  470518 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1027 20:03:00.603771  470518 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1027 20:03:00.603826  470518 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1027 20:03:00.603900  470518 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1027 20:03:00.603959  470518 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1027 20:03:00.697183  470518 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 20:03:00.697301  470518 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 20:03:00.697401  470518 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 20:03:00.705858  470518 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 20:03:00.711446  470518 out.go:252]   - Generating certificates and keys ...
	I1027 20:03:00.711558  470518 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1027 20:03:00.711646  470518 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	W1027 20:03:02.567799  466537 node_ready.go:57] node "default-k8s-diff-port-073048" has "Ready":"False" status (will retry)
	W1027 20:03:05.069096  466537 node_ready.go:57] node "default-k8s-diff-port-073048" has "Ready":"False" status (will retry)
	I1027 20:03:01.769019  470518 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 20:03:02.074071  470518 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1027 20:03:02.832146  470518 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1027 20:03:03.617058  470518 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1027 20:03:03.921467  470518 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1027 20:03:03.921949  470518 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-702588] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1027 20:03:04.503315  470518 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1027 20:03:04.503524  470518 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-702588] and IPs [192.168.76.2 127.0.0.1 ::1]
	W1027 20:03:07.569270  466537 node_ready.go:57] node "default-k8s-diff-port-073048" has "Ready":"False" status (will retry)
	W1027 20:03:10.067868  466537 node_ready.go:57] node "default-k8s-diff-port-073048" has "Ready":"False" status (will retry)
	I1027 20:03:05.845312  470518 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 20:03:07.409001  470518 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 20:03:08.353102  470518 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1027 20:03:08.353425  470518 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 20:03:09.162948  470518 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 20:03:09.800753  470518 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 20:03:09.981966  470518 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 20:03:10.108623  470518 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 20:03:10.713603  470518 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 20:03:10.714213  470518 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 20:03:10.716889  470518 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 20:03:10.720135  470518 out.go:252]   - Booting up control plane ...
	I1027 20:03:10.720234  470518 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 20:03:10.720314  470518 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 20:03:10.721825  470518 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 20:03:10.748215  470518 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 20:03:10.748484  470518 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 20:03:10.756529  470518 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 20:03:10.756776  470518 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 20:03:10.756966  470518 kubeadm.go:318] [kubelet-start] Starting the kubelet
	W1027 20:03:12.068273  466537 node_ready.go:57] node "default-k8s-diff-port-073048" has "Ready":"False" status (will retry)
	W1027 20:03:14.567263  466537 node_ready.go:57] node "default-k8s-diff-port-073048" has "Ready":"False" status (will retry)
	I1027 20:03:10.880506  470518 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 20:03:10.880637  470518 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 20:03:12.883422  470518 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.000880783s
	I1027 20:03:12.884877  470518 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 20:03:12.884990  470518 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1027 20:03:12.885103  470518 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 20:03:12.885191  470518 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
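
	The three control-plane-check probes above poll plain HTTP(S) endpoints and can be queried by hand from the node; a sketch:

	    curl -sk https://192.168.76.2:8443/livez      # kube-apiserver
	    curl -sk https://127.0.0.1:10257/healthz      # kube-controller-manager
	    curl -sk https://127.0.0.1:10259/livez        # kube-scheduler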
	I1027 20:03:15.567604  466537 node_ready.go:49] node "default-k8s-diff-port-073048" is "Ready"
	I1027 20:03:15.567644  466537 node_ready.go:38] duration metric: took 41.00349104s for node "default-k8s-diff-port-073048" to be "Ready" ...
	I1027 20:03:15.567660  466537 api_server.go:52] waiting for apiserver process to appear ...
	I1027 20:03:15.567721  466537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 20:03:15.601316  466537 api_server.go:72] duration metric: took 41.979225195s to wait for apiserver process to appear ...
	I1027 20:03:15.601337  466537 api_server.go:88] waiting for apiserver healthz status ...
	I1027 20:03:15.601355  466537 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1027 20:03:15.631459  466537 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1027 20:03:15.632598  466537 api_server.go:141] control plane version: v1.34.1
	I1027 20:03:15.632618  466537 api_server.go:131] duration metric: took 31.27399ms to wait for apiserver health ...
	I1027 20:03:15.632626  466537 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 20:03:15.637564  466537 system_pods.go:59] 8 kube-system pods found
	I1027 20:03:15.637593  466537 system_pods.go:61] "coredns-66bc5c9577-6vc9v" [5d420b85-b106-4d91-9ebd-483f8ccfa445] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:03:15.637607  466537 system_pods.go:61] "etcd-default-k8s-diff-port-073048" [13538216-46ea-4c98-a89b-a4d0be59d38b] Running
	I1027 20:03:15.637613  466537 system_pods.go:61] "kindnet-qc8zw" [10158916-6994-4c41-ba7d-e5bd80a7fd56] Running
	I1027 20:03:15.637617  466537 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-073048" [7bb63b28-d55e-4f2f-bd52-0dbce0ee5858] Running
	I1027 20:03:15.637622  466537 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-073048" [502a218b-490e-47f8-b946-6d2df9e30913] Running
	I1027 20:03:15.637626  466537 system_pods.go:61] "kube-proxy-dsq46" [91a97ff3-0f9b-41c0-bbec-870515448861] Running
	I1027 20:03:15.637630  466537 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-073048" [b090cbea-1aa4-46ff-a144-7a93a8fdeece] Running
	I1027 20:03:15.637636  466537 system_pods.go:61] "storage-provisioner" [9fef6eaa-5369-4f2d-9dd1-f3d7074b9a77] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 20:03:15.637643  466537 system_pods.go:74] duration metric: took 5.01019ms to wait for pod list to return data ...
	I1027 20:03:15.637651  466537 default_sa.go:34] waiting for default service account to be created ...
	I1027 20:03:15.640036  466537 default_sa.go:45] found service account: "default"
	I1027 20:03:15.640086  466537 default_sa.go:55] duration metric: took 2.429626ms for default service account to be created ...
	I1027 20:03:15.640109  466537 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 20:03:15.643233  466537 system_pods.go:86] 8 kube-system pods found
	I1027 20:03:15.643294  466537 system_pods.go:89] "coredns-66bc5c9577-6vc9v" [5d420b85-b106-4d91-9ebd-483f8ccfa445] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:03:15.643315  466537 system_pods.go:89] "etcd-default-k8s-diff-port-073048" [13538216-46ea-4c98-a89b-a4d0be59d38b] Running
	I1027 20:03:15.643339  466537 system_pods.go:89] "kindnet-qc8zw" [10158916-6994-4c41-ba7d-e5bd80a7fd56] Running
	I1027 20:03:15.643384  466537 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-073048" [7bb63b28-d55e-4f2f-bd52-0dbce0ee5858] Running
	I1027 20:03:15.643409  466537 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-073048" [502a218b-490e-47f8-b946-6d2df9e30913] Running
	I1027 20:03:15.643431  466537 system_pods.go:89] "kube-proxy-dsq46" [91a97ff3-0f9b-41c0-bbec-870515448861] Running
	I1027 20:03:15.643452  466537 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-073048" [b090cbea-1aa4-46ff-a144-7a93a8fdeece] Running
	I1027 20:03:15.643488  466537 system_pods.go:89] "storage-provisioner" [9fef6eaa-5369-4f2d-9dd1-f3d7074b9a77] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 20:03:15.643540  466537 retry.go:31] will retry after 235.50127ms: missing components: kube-dns
	I1027 20:03:15.928665  466537 system_pods.go:86] 8 kube-system pods found
	I1027 20:03:15.928754  466537 system_pods.go:89] "coredns-66bc5c9577-6vc9v" [5d420b85-b106-4d91-9ebd-483f8ccfa445] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:03:15.928778  466537 system_pods.go:89] "etcd-default-k8s-diff-port-073048" [13538216-46ea-4c98-a89b-a4d0be59d38b] Running
	I1027 20:03:15.928818  466537 system_pods.go:89] "kindnet-qc8zw" [10158916-6994-4c41-ba7d-e5bd80a7fd56] Running
	I1027 20:03:15.928845  466537 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-073048" [7bb63b28-d55e-4f2f-bd52-0dbce0ee5858] Running
	I1027 20:03:15.928866  466537 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-073048" [502a218b-490e-47f8-b946-6d2df9e30913] Running
	I1027 20:03:15.928888  466537 system_pods.go:89] "kube-proxy-dsq46" [91a97ff3-0f9b-41c0-bbec-870515448861] Running
	I1027 20:03:15.928922  466537 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-073048" [b090cbea-1aa4-46ff-a144-7a93a8fdeece] Running
	I1027 20:03:15.928946  466537 system_pods.go:89] "storage-provisioner" [9fef6eaa-5369-4f2d-9dd1-f3d7074b9a77] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 20:03:15.928975  466537 retry.go:31] will retry after 314.055603ms: missing components: kube-dns
	I1027 20:03:16.248749  466537 system_pods.go:86] 8 kube-system pods found
	I1027 20:03:16.254584  466537 system_pods.go:89] "coredns-66bc5c9577-6vc9v" [5d420b85-b106-4d91-9ebd-483f8ccfa445] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:03:16.254607  466537 system_pods.go:89] "etcd-default-k8s-diff-port-073048" [13538216-46ea-4c98-a89b-a4d0be59d38b] Running
	I1027 20:03:16.254618  466537 system_pods.go:89] "kindnet-qc8zw" [10158916-6994-4c41-ba7d-e5bd80a7fd56] Running
	I1027 20:03:16.254624  466537 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-073048" [7bb63b28-d55e-4f2f-bd52-0dbce0ee5858] Running
	I1027 20:03:16.254629  466537 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-073048" [502a218b-490e-47f8-b946-6d2df9e30913] Running
	I1027 20:03:16.254634  466537 system_pods.go:89] "kube-proxy-dsq46" [91a97ff3-0f9b-41c0-bbec-870515448861] Running
	I1027 20:03:16.254639  466537 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-073048" [b090cbea-1aa4-46ff-a144-7a93a8fdeece] Running
	I1027 20:03:16.254645  466537 system_pods.go:89] "storage-provisioner" [9fef6eaa-5369-4f2d-9dd1-f3d7074b9a77] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 20:03:16.254664  466537 retry.go:31] will retry after 482.287114ms: missing components: kube-dns
	I1027 20:03:16.740562  466537 system_pods.go:86] 8 kube-system pods found
	I1027 20:03:16.740604  466537 system_pods.go:89] "coredns-66bc5c9577-6vc9v" [5d420b85-b106-4d91-9ebd-483f8ccfa445] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:03:16.740612  466537 system_pods.go:89] "etcd-default-k8s-diff-port-073048" [13538216-46ea-4c98-a89b-a4d0be59d38b] Running
	I1027 20:03:16.740619  466537 system_pods.go:89] "kindnet-qc8zw" [10158916-6994-4c41-ba7d-e5bd80a7fd56] Running
	I1027 20:03:16.740624  466537 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-073048" [7bb63b28-d55e-4f2f-bd52-0dbce0ee5858] Running
	I1027 20:03:16.740629  466537 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-073048" [502a218b-490e-47f8-b946-6d2df9e30913] Running
	I1027 20:03:16.740633  466537 system_pods.go:89] "kube-proxy-dsq46" [91a97ff3-0f9b-41c0-bbec-870515448861] Running
	I1027 20:03:16.740643  466537 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-073048" [b090cbea-1aa4-46ff-a144-7a93a8fdeece] Running
	I1027 20:03:16.740650  466537 system_pods.go:89] "storage-provisioner" [9fef6eaa-5369-4f2d-9dd1-f3d7074b9a77] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 20:03:16.740671  466537 retry.go:31] will retry after 516.518808ms: missing components: kube-dns
	I1027 20:03:17.261626  466537 system_pods.go:86] 8 kube-system pods found
	I1027 20:03:17.261661  466537 system_pods.go:89] "coredns-66bc5c9577-6vc9v" [5d420b85-b106-4d91-9ebd-483f8ccfa445] Running
	I1027 20:03:17.261668  466537 system_pods.go:89] "etcd-default-k8s-diff-port-073048" [13538216-46ea-4c98-a89b-a4d0be59d38b] Running
	I1027 20:03:17.261676  466537 system_pods.go:89] "kindnet-qc8zw" [10158916-6994-4c41-ba7d-e5bd80a7fd56] Running
	I1027 20:03:17.261681  466537 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-073048" [7bb63b28-d55e-4f2f-bd52-0dbce0ee5858] Running
	I1027 20:03:17.261685  466537 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-073048" [502a218b-490e-47f8-b946-6d2df9e30913] Running
	I1027 20:03:17.261689  466537 system_pods.go:89] "kube-proxy-dsq46" [91a97ff3-0f9b-41c0-bbec-870515448861] Running
	I1027 20:03:17.261694  466537 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-073048" [b090cbea-1aa4-46ff-a144-7a93a8fdeece] Running
	I1027 20:03:17.261701  466537 system_pods.go:89] "storage-provisioner" [9fef6eaa-5369-4f2d-9dd1-f3d7074b9a77] Running
	I1027 20:03:17.261712  466537 system_pods.go:126] duration metric: took 1.621584317s to wait for k8s-apps to be running ...
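
	The retry loop above keeps re-listing kube-system pods until kube-dns reports Running; a minimal equivalent, assuming the stock k8s-app=kube-dns label on CoreDNS:

	    until kubectl -n kube-system get pods -l k8s-app=kube-dns \
	          -o jsonpath='{.items[*].status.phase}' | grep -q Running; do
	      sleep 2
	    done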
	I1027 20:03:17.261728  466537 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 20:03:17.261787  466537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 20:03:17.282881  466537 system_svc.go:56] duration metric: took 21.14395ms WaitForService to wait for kubelet
	I1027 20:03:17.282910  466537 kubeadm.go:586] duration metric: took 43.660826304s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 20:03:17.282929  466537 node_conditions.go:102] verifying NodePressure condition ...
	I1027 20:03:17.286165  466537 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 20:03:17.286203  466537 node_conditions.go:123] node cpu capacity is 2
	I1027 20:03:17.286216  466537 node_conditions.go:105] duration metric: took 3.281549ms to run NodePressure ...
	I1027 20:03:17.286228  466537 start.go:241] waiting for startup goroutines ...
	I1027 20:03:17.286236  466537 start.go:246] waiting for cluster config update ...
	I1027 20:03:17.286254  466537 start.go:255] writing updated cluster config ...
	I1027 20:03:17.286560  466537 ssh_runner.go:195] Run: rm -f paused
	I1027 20:03:17.293004  466537 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 20:03:17.296662  466537 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6vc9v" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:03:17.303505  466537 pod_ready.go:94] pod "coredns-66bc5c9577-6vc9v" is "Ready"
	I1027 20:03:17.303533  466537 pod_ready.go:86] duration metric: took 6.834508ms for pod "coredns-66bc5c9577-6vc9v" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:03:17.305863  466537 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-073048" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:03:17.310685  466537 pod_ready.go:94] pod "etcd-default-k8s-diff-port-073048" is "Ready"
	I1027 20:03:17.310719  466537 pod_ready.go:86] duration metric: took 4.832497ms for pod "etcd-default-k8s-diff-port-073048" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:03:17.313103  466537 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-073048" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:03:17.317546  466537 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-073048" is "Ready"
	I1027 20:03:17.317578  466537 pod_ready.go:86] duration metric: took 4.440516ms for pod "kube-apiserver-default-k8s-diff-port-073048" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:03:17.319868  466537 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-073048" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:03:17.697637  466537 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-073048" is "Ready"
	I1027 20:03:17.697680  466537 pod_ready.go:86] duration metric: took 377.777787ms for pod "kube-controller-manager-default-k8s-diff-port-073048" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:03:17.897708  466537 pod_ready.go:83] waiting for pod "kube-proxy-dsq46" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:03:18.297085  466537 pod_ready.go:94] pod "kube-proxy-dsq46" is "Ready"
	I1027 20:03:18.297159  466537 pod_ready.go:86] duration metric: took 399.42515ms for pod "kube-proxy-dsq46" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:03:18.497927  466537 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-073048" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:03:18.897735  466537 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-073048" is "Ready"
	I1027 20:03:18.897762  466537 pod_ready.go:86] duration metric: took 399.768811ms for pod "kube-scheduler-default-k8s-diff-port-073048" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:03:18.897776  466537 pod_ready.go:40] duration metric: took 1.604735913s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 20:03:18.976577  466537 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 20:03:18.980080  466537 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-073048" cluster and "default" namespace by default
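
	Once a profile prints "Done!", the kubeconfig switch can be confirmed directly; a sketch:

	    kubectl config current-context    # expected: default-k8s-diff-port-073048
	    kubectl get nodes -o wide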
	I1027 20:03:16.103784  470518 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.217753414s
	I1027 20:03:18.665470  470518 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.780574885s
	I1027 20:03:20.386640  470518 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.501682037s
	I1027 20:03:20.409932  470518 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 20:03:20.423437  470518 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 20:03:20.441572  470518 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 20:03:20.441781  470518 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-702588 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 20:03:20.454775  470518 kubeadm.go:318] [bootstrap-token] Using token: mzl0zn.i55e3nx87rh2mbwp
	I1027 20:03:20.458078  470518 out.go:252]   - Configuring RBAC rules ...
	I1027 20:03:20.458204  470518 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 20:03:20.470453  470518 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 20:03:20.478853  470518 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 20:03:20.484430  470518 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 20:03:20.489111  470518 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 20:03:20.495563  470518 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 20:03:20.793244  470518 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 20:03:21.228645  470518 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1027 20:03:21.793541  470518 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1027 20:03:21.794622  470518 kubeadm.go:318] 
	I1027 20:03:21.794695  470518 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1027 20:03:21.794709  470518 kubeadm.go:318] 
	I1027 20:03:21.794786  470518 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1027 20:03:21.794794  470518 kubeadm.go:318] 
	I1027 20:03:21.794819  470518 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1027 20:03:21.794881  470518 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 20:03:21.794934  470518 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 20:03:21.794943  470518 kubeadm.go:318] 
	I1027 20:03:21.795024  470518 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1027 20:03:21.795034  470518 kubeadm.go:318] 
	I1027 20:03:21.795081  470518 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 20:03:21.795085  470518 kubeadm.go:318] 
	I1027 20:03:21.795137  470518 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1027 20:03:21.795216  470518 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 20:03:21.795284  470518 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 20:03:21.795289  470518 kubeadm.go:318] 
	I1027 20:03:21.795371  470518 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 20:03:21.795447  470518 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1027 20:03:21.795452  470518 kubeadm.go:318] 
	I1027 20:03:21.795535  470518 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token mzl0zn.i55e3nx87rh2mbwp \
	I1027 20:03:21.795660  470518 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9473f077605f6866eb08c8a712af7c56a0deb8dc2a907ec8cbb4b8ae396e8f1f \
	I1027 20:03:21.795686  470518 kubeadm.go:318] 	--control-plane 
	I1027 20:03:21.795691  470518 kubeadm.go:318] 
	I1027 20:03:21.795774  470518 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1027 20:03:21.795778  470518 kubeadm.go:318] 
	I1027 20:03:21.795858  470518 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token mzl0zn.i55e3nx87rh2mbwp \
	I1027 20:03:21.795959  470518 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9473f077605f6866eb08c8a712af7c56a0deb8dc2a907ec8cbb4b8ae396e8f1f 
	I1027 20:03:21.800525  470518 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1027 20:03:21.800774  470518 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1027 20:03:21.800920  470518 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
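
	The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA public key; it can be recomputed on the control plane with the openssl pipeline the kubeadm docs describe (assuming an RSA CA key):

	    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'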
	I1027 20:03:21.800939  470518 cni.go:84] Creating CNI manager for ""
	I1027 20:03:21.800947  470518 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 20:03:21.804256  470518 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 20:03:21.808092  470518 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 20:03:21.812849  470518 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 20:03:21.812877  470518 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 20:03:21.827464  470518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
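
	After the kindnet manifest is applied, the rollout can be checked; a sketch, assuming app=kindnet is the label minikube's manifest sets on the DaemonSet pods:

	    kubectl -n kube-system get pods -l app=kindnet -o wide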
	I1027 20:03:22.159276  470518 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 20:03:22.159358  470518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:03:22.159408  470518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-702588 minikube.k8s.io/updated_at=2025_10_27T20_03_22_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f minikube.k8s.io/name=newest-cni-702588 minikube.k8s.io/primary=true
	I1027 20:03:22.331502  470518 ops.go:34] apiserver oom_adj: -16
	I1027 20:03:22.331604  470518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:03:22.832139  470518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:03:23.331776  470518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:03:23.832212  470518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:03:24.332196  470518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:03:24.832188  470518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:03:25.332355  470518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:03:25.831912  470518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:03:25.934837  470518 kubeadm.go:1113] duration metric: took 3.77554406s to wait for elevateKubeSystemPrivileges
	I1027 20:03:25.934864  470518 kubeadm.go:402] duration metric: took 25.754783149s to StartCluster
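
	The loop above re-runs "kubectl get sa default" until the default ServiceAccount exists, at which point the minikube-rbac binding created at 20:03:22 is usable; a quick check:

	    kubectl get serviceaccount default
	    kubectl get clusterrolebinding minikube-rbac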
	I1027 20:03:25.934891  470518 settings.go:142] acquiring lock: {Name:mk49b9e2e9b7901c6b51e89b8b1e8ee7f9e88107 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:03:25.934951  470518 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 20:03:25.936013  470518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/kubeconfig: {Name:mk0a03a594f8d9f9199b1c7cff7c550cec8414c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:03:25.936238  470518 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 20:03:25.936392  470518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 20:03:25.936579  470518 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 20:03:25.936649  470518 config.go:182] Loaded profile config "newest-cni-702588": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:03:25.936655  470518 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-702588"
	I1027 20:03:25.936679  470518 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-702588"
	I1027 20:03:25.936682  470518 addons.go:69] Setting default-storageclass=true in profile "newest-cni-702588"
	I1027 20:03:25.936695  470518 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-702588"
	I1027 20:03:25.936706  470518 host.go:66] Checking if "newest-cni-702588" exists ...
	I1027 20:03:25.937000  470518 cli_runner.go:164] Run: docker container inspect newest-cni-702588 --format={{.State.Status}}
	I1027 20:03:25.937156  470518 cli_runner.go:164] Run: docker container inspect newest-cni-702588 --format={{.State.Status}}
	I1027 20:03:25.940880  470518 out.go:179] * Verifying Kubernetes components...
	I1027 20:03:25.943729  470518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:03:25.974806  470518 addons.go:238] Setting addon default-storageclass=true in "newest-cni-702588"
	I1027 20:03:25.974845  470518 host.go:66] Checking if "newest-cni-702588" exists ...
	I1027 20:03:25.975286  470518 cli_runner.go:164] Run: docker container inspect newest-cni-702588 --format={{.State.Status}}
	I1027 20:03:25.986005  470518 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 20:03:25.991152  470518 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 20:03:25.991188  470518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 20:03:25.991259  470518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-702588
	I1027 20:03:26.009986  470518 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 20:03:26.010024  470518 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 20:03:26.010089  470518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-702588
	I1027 20:03:26.047558  470518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/newest-cni-702588/id_rsa Username:docker}
	I1027 20:03:26.049020  470518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/newest-cni-702588/id_rsa Username:docker}
	I1027 20:03:26.214132  470518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 20:03:26.246823  470518 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 20:03:26.293737  470518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 20:03:26.348863  470518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 20:03:26.793545  470518 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
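
	The sed pipeline at 20:03:26.214 splices a hosts block (192.168.76.1 host.minikube.internal, with fallthrough) into the Corefile; the patched result can be dumped with:

	    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'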
	I1027 20:03:26.795562  470518 api_server.go:52] waiting for apiserver process to appear ...
	I1027 20:03:26.795638  470518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 20:03:27.105561  470518 api_server.go:72] duration metric: took 1.169298134s to wait for apiserver process to appear ...
	I1027 20:03:27.105582  470518 api_server.go:88] waiting for apiserver healthz status ...
	I1027 20:03:27.105610  470518 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 20:03:27.120716  470518 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1027 20:03:27.122229  470518 api_server.go:141] control plane version: v1.34.1
	I1027 20:03:27.122256  470518 api_server.go:131] duration metric: took 16.665708ms to wait for apiserver health ...
	I1027 20:03:27.122266  470518 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 20:03:27.122631  470518 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1027 20:03:27.125474  470518 addons.go:514] duration metric: took 1.188884695s for enable addons: enabled=[storage-provisioner default-storageclass]
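
	Addon state per profile is queryable from the same binary used throughout this report; a sketch (run from the CI workspace root):

	    out/minikube-linux-arm64 -p newest-cni-702588 addons list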
	I1027 20:03:27.126448  470518 system_pods.go:59] 8 kube-system pods found
	I1027 20:03:27.126483  470518 system_pods.go:61] "coredns-66bc5c9577-xclwd" [eee638fa-65a2-4c75-ba2c-7615f09c51da] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 20:03:27.126492  470518 system_pods.go:61] "etcd-newest-cni-702588" [84702404-c34c-450f-a8c7-f94b0088ac21] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 20:03:27.126501  470518 system_pods.go:61] "kindnet-7ctmm" [98e70164-cd51-4563-91d0-7c0bae3c2ade] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1027 20:03:27.126513  470518 system_pods.go:61] "kube-apiserver-newest-cni-702588" [e508c926-b287-4ae8-83a6-a1a4360c85f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 20:03:27.126525  470518 system_pods.go:61] "kube-controller-manager-newest-cni-702588" [01fa6132-66de-422f-bbd3-2c1e46280199] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 20:03:27.126532  470518 system_pods.go:61] "kube-proxy-k9lhg" [f36ed32e-d331-485d-ba07-01353f65e231] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1027 20:03:27.126541  470518 system_pods.go:61] "kube-scheduler-newest-cni-702588" [6089c80f-86d4-4837-9eaf-2e473ed151d5] Running
	I1027 20:03:27.126547  470518 system_pods.go:61] "storage-provisioner" [9074befc-b06a-4ae1-8cf5-5544c94b2e07] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 20:03:27.126552  470518 system_pods.go:74] duration metric: took 4.281242ms to wait for pod list to return data ...
	I1027 20:03:27.126559  470518 default_sa.go:34] waiting for default service account to be created ...
	I1027 20:03:27.128894  470518 default_sa.go:45] found service account: "default"
	I1027 20:03:27.128917  470518 default_sa.go:55] duration metric: took 2.352237ms for default service account to be created ...
	I1027 20:03:27.128927  470518 kubeadm.go:586] duration metric: took 1.192668164s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1027 20:03:27.128942  470518 node_conditions.go:102] verifying NodePressure condition ...
	I1027 20:03:27.131355  470518 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 20:03:27.131386  470518 node_conditions.go:123] node cpu capacity is 2
	I1027 20:03:27.131398  470518 node_conditions.go:105] duration metric: took 2.449942ms to run NodePressure ...
	I1027 20:03:27.131409  470518 start.go:241] waiting for startup goroutines ...
	I1027 20:03:27.299797  470518 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-702588" context rescaled to 1 replicas
	I1027 20:03:27.299831  470518 start.go:246] waiting for cluster config update ...
	I1027 20:03:27.299842  470518 start.go:255] writing updated cluster config ...
	I1027 20:03:27.300139  470518 ssh_runner.go:195] Run: rm -f paused
	I1027 20:03:27.413635  470518 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 20:03:27.416916  470518 out.go:179] * Done! kubectl is now configured to use "newest-cni-702588" cluster and "default" namespace by default
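
The "minor skew: 1" note is informational: kubectl supports one minor version of skew against the server, so client 1.33.2 against server 1.34.1 is within policy. To check the skew yourself against this profile's kubeconfig:

	# Client and server versions side by side; a minor-version gap of 1 is supported
	kubectl version
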
	
	
	==> CRI-O <==
	Oct 27 20:03:15 default-k8s-diff-port-073048 crio[838]: time="2025-10-27T20:03:15.84202101Z" level=info msg="Created container 85a0cd12f7bae6ff28418619d144de140b817113f6c88788ab2809812fad422a: kube-system/coredns-66bc5c9577-6vc9v/coredns" id=009f9a23-9ac1-46cc-91f6-b29f395d2c0e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 20:03:15 default-k8s-diff-port-073048 crio[838]: time="2025-10-27T20:03:15.84295088Z" level=info msg="Starting container: 85a0cd12f7bae6ff28418619d144de140b817113f6c88788ab2809812fad422a" id=02ddac0c-f490-4715-8cd3-e9d30828bfed name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 20:03:15 default-k8s-diff-port-073048 crio[838]: time="2025-10-27T20:03:15.856596595Z" level=info msg="Started container" PID=1727 containerID=85a0cd12f7bae6ff28418619d144de140b817113f6c88788ab2809812fad422a description=kube-system/coredns-66bc5c9577-6vc9v/coredns id=02ddac0c-f490-4715-8cd3-e9d30828bfed name=/runtime.v1.RuntimeService/StartContainer sandboxID=b742f7f52205c62e6ba85c3d494d998de8e2456a7f180b969b87f03c7e237334
	Oct 27 20:03:19 default-k8s-diff-port-073048 crio[838]: time="2025-10-27T20:03:19.55593968Z" level=info msg="Running pod sandbox: default/busybox/POD" id=7107d0b1-c50c-4f82-9dd9-e49343639db3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 20:03:19 default-k8s-diff-port-073048 crio[838]: time="2025-10-27T20:03:19.5560353Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:03:19 default-k8s-diff-port-073048 crio[838]: time="2025-10-27T20:03:19.569784388Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:df6ffb28802a4ecc2498cd9f84adfa8c3cbd7c7f26b740e21c61f9f915fb6223 UID:53db98e8-ffba-4a6b-b0b4-8145690263ae NetNS:/var/run/netns/e187b822-bd6a-4947-b07b-d92791d61583 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079748}] Aliases:map[]}"
	Oct 27 20:03:19 default-k8s-diff-port-073048 crio[838]: time="2025-10-27T20:03:19.569954501Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 27 20:03:19 default-k8s-diff-port-073048 crio[838]: time="2025-10-27T20:03:19.603241308Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:df6ffb28802a4ecc2498cd9f84adfa8c3cbd7c7f26b740e21c61f9f915fb6223 UID:53db98e8-ffba-4a6b-b0b4-8145690263ae NetNS:/var/run/netns/e187b822-bd6a-4947-b07b-d92791d61583 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079748}] Aliases:map[]}"
	Oct 27 20:03:19 default-k8s-diff-port-073048 crio[838]: time="2025-10-27T20:03:19.603572753Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 27 20:03:19 default-k8s-diff-port-073048 crio[838]: time="2025-10-27T20:03:19.611401079Z" level=info msg="Ran pod sandbox df6ffb28802a4ecc2498cd9f84adfa8c3cbd7c7f26b740e21c61f9f915fb6223 with infra container: default/busybox/POD" id=7107d0b1-c50c-4f82-9dd9-e49343639db3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 20:03:19 default-k8s-diff-port-073048 crio[838]: time="2025-10-27T20:03:19.61293926Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9c75e5be-425d-48e0-88d8-80483ebe5382 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:03:19 default-k8s-diff-port-073048 crio[838]: time="2025-10-27T20:03:19.613194982Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=9c75e5be-425d-48e0-88d8-80483ebe5382 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:03:19 default-k8s-diff-port-073048 crio[838]: time="2025-10-27T20:03:19.613316439Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=9c75e5be-425d-48e0-88d8-80483ebe5382 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:03:19 default-k8s-diff-port-073048 crio[838]: time="2025-10-27T20:03:19.614417496Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b7543964-3c46-4f26-bf15-df598c3b3fa9 name=/runtime.v1.ImageService/PullImage
	Oct 27 20:03:19 default-k8s-diff-port-073048 crio[838]: time="2025-10-27T20:03:19.618212836Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 27 20:03:21 default-k8s-diff-port-073048 crio[838]: time="2025-10-27T20:03:21.676772316Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=b7543964-3c46-4f26-bf15-df598c3b3fa9 name=/runtime.v1.ImageService/PullImage
	Oct 27 20:03:21 default-k8s-diff-port-073048 crio[838]: time="2025-10-27T20:03:21.677760317Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a478979a-26d6-49aa-8cc9-d8f6db88a602 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:03:21 default-k8s-diff-port-073048 crio[838]: time="2025-10-27T20:03:21.679369191Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=25cb8691-b7e7-432a-8e92-ca6a4701a6d1 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:03:21 default-k8s-diff-port-073048 crio[838]: time="2025-10-27T20:03:21.684675758Z" level=info msg="Creating container: default/busybox/busybox" id=dabbd3c9-fe86-4098-b2bf-15f2ccb69347 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 20:03:21 default-k8s-diff-port-073048 crio[838]: time="2025-10-27T20:03:21.684801441Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:03:21 default-k8s-diff-port-073048 crio[838]: time="2025-10-27T20:03:21.693143969Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:03:21 default-k8s-diff-port-073048 crio[838]: time="2025-10-27T20:03:21.69363504Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:03:21 default-k8s-diff-port-073048 crio[838]: time="2025-10-27T20:03:21.70840385Z" level=info msg="Created container fe1b8555726aad824e33e3d6cbe064a1f6e957515b41206d0725c147ac75a989: default/busybox/busybox" id=dabbd3c9-fe86-4098-b2bf-15f2ccb69347 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 20:03:21 default-k8s-diff-port-073048 crio[838]: time="2025-10-27T20:03:21.709522293Z" level=info msg="Starting container: fe1b8555726aad824e33e3d6cbe064a1f6e957515b41206d0725c147ac75a989" id=5a1d6e51-6158-4894-af01-3e73aa7ef1ff name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 20:03:21 default-k8s-diff-port-073048 crio[838]: time="2025-10-27T20:03:21.712647522Z" level=info msg="Started container" PID=1781 containerID=fe1b8555726aad824e33e3d6cbe064a1f6e957515b41206d0725c147ac75a989 description=default/busybox/busybox id=5a1d6e51-6158-4894-af01-3e73aa7ef1ff name=/runtime.v1.RuntimeService/StartContainer sandboxID=df6ffb28802a4ecc2498cd9f84adfa8c3cbd7c7f26b740e21c61f9f915fb6223
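
The pull sequence above (status check, miss, pull by tag, resolution to a digest) can be replayed against CRI-O directly with crictl. A sketch run on the node; the image name comes from the log:

	# Ask CRI-O for image status, then pull, mirroring the logged sequence
	sudo crictl inspecti gcr.io/k8s-minikube/busybox:1.28.4-glibc
	sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
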
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	fe1b8555726aa       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   df6ffb28802a4       busybox                                                default
	85a0cd12f7bae       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago       Running             coredns                   0                   b742f7f52205c       coredns-66bc5c9577-6vc9v                               kube-system
	9a36c76fe6a58       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago       Running             storage-provisioner       0                   a72d3b848d58a       storage-provisioner                                    kube-system
	3f7d47857b94a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      54 seconds ago       Running             kube-proxy                0                   c3f261fdfdbfa       kube-proxy-dsq46                                       kube-system
	d23ab58b85e00       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      55 seconds ago       Running             kindnet-cni               0                   38c01665d7938       kindnet-qc8zw                                          kube-system
	54166af94f787       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   3e06976685ab2       kube-controller-manager-default-k8s-diff-port-073048   kube-system
	12e7fd3d4126e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   78985bf59369d       kube-scheduler-default-k8s-diff-port-073048            kube-system
	e2392517ff633       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   e28250f5d74c6       kube-apiserver-default-k8s-diff-port-073048            kube-system
	80edfa65fdbbf       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   f677fb3062537       etcd-default-k8s-diff-port-073048                      kube-system
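
The table above is essentially crictl output; to regenerate it on the node:

	# List all containers (running and exited) known to CRI-O
	sudo crictl ps -a
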
	
	
	==> coredns [85a0cd12f7bae6ff28418619d144de140b817113f6c88788ab2809812fad422a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48284 - 7676 "HINFO IN 5921239710338305845.3204494590720952925. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012019176s
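
With the hosts block injected into the Corefile (see the CoreDNS edit earlier), the record can be verified from inside the cluster. A minimal sketch using a throwaway pod (the pod name and busybox tag are arbitrary choices, not from the log); the name should resolve to the host gateway address:

	# Resolve the injected record through the cluster DNS service
	kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup host.minikube.internal
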
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-073048
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-073048
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=default-k8s-diff-port-073048
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T20_02_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 20:02:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-073048
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 20:03:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 20:03:15 +0000   Mon, 27 Oct 2025 20:02:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 20:03:15 +0000   Mon, 27 Oct 2025 20:02:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 20:03:15 +0000   Mon, 27 Oct 2025 20:02:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 20:03:15 +0000   Mon, 27 Oct 2025 20:03:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-073048
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                dd93b306-3965-477c-8572-564479b43098
	  Boot ID:                    23ea05b4-8203-4fb9-a84a-5deb4b091cbb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-6vc9v                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-default-k8s-diff-port-073048                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-qc8zw                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-default-k8s-diff-port-073048             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-073048    200m (10%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-dsq46                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-default-k8s-diff-port-073048             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 54s   kube-proxy       
	  Normal   Starting                 61s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s   kubelet          Node default-k8s-diff-port-073048 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s   kubelet          Node default-k8s-diff-port-073048 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s   kubelet          Node default-k8s-diff-port-073048 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s   node-controller  Node default-k8s-diff-port-073048 event: Registered Node default-k8s-diff-port-073048 in Controller
	  Normal   NodeReady                14s   kubelet          Node default-k8s-diff-port-073048 status is now: NodeReady
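
This section is kubectl describe output; to regenerate it against this profile's kubeconfig:

	# Reproduce the node report above
	kubectl describe node default-k8s-diff-port-073048
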
	
	
	==> dmesg <==
	[Oct27 19:39] overlayfs: idmapped layers are currently not supported
	[Oct27 19:40] overlayfs: idmapped layers are currently not supported
	[  +7.015891] overlayfs: idmapped layers are currently not supported
	[Oct27 19:41] overlayfs: idmapped layers are currently not supported
	[ +28.455216] overlayfs: idmapped layers are currently not supported
	[Oct27 19:42] overlayfs: idmapped layers are currently not supported
	[ +26.490174] overlayfs: idmapped layers are currently not supported
	[Oct27 19:44] overlayfs: idmapped layers are currently not supported
	[Oct27 19:45] overlayfs: idmapped layers are currently not supported
	[Oct27 19:47] overlayfs: idmapped layers are currently not supported
	[Oct27 19:49] overlayfs: idmapped layers are currently not supported
	[ +31.410335] overlayfs: idmapped layers are currently not supported
	[Oct27 19:51] overlayfs: idmapped layers are currently not supported
	[Oct27 19:53] overlayfs: idmapped layers are currently not supported
	[Oct27 19:54] overlayfs: idmapped layers are currently not supported
	[Oct27 19:55] overlayfs: idmapped layers are currently not supported
	[Oct27 19:56] overlayfs: idmapped layers are currently not supported
	[Oct27 19:57] overlayfs: idmapped layers are currently not supported
	[Oct27 19:58] overlayfs: idmapped layers are currently not supported
	[Oct27 19:59] overlayfs: idmapped layers are currently not supported
	[Oct27 20:00] overlayfs: idmapped layers are currently not supported
	[ +41.321877] overlayfs: idmapped layers are currently not supported
	[Oct27 20:01] overlayfs: idmapped layers are currently not supported
	[Oct27 20:02] overlayfs: idmapped layers are currently not supported
	[Oct27 20:03] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [80edfa65fdbbfdb99a3ab570d2cdf2fb8db9572bb20625494728520f8bcc33fd] <==
	{"level":"warn","ts":"2025-10-27T20:02:23.842618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:02:23.911072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:02:23.959269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:02:24.011308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:02:24.060008Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:02:24.086949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:02:24.122522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:02:24.141726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:02:24.155180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:02:24.204205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:02:24.259159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:02:24.259828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:02:24.277837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:02:24.303944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:02:24.322037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:02:24.353922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:02:24.370369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:02:24.399274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:02:24.431087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:02:24.447701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:02:24.491373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:02:24.546219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:02:24.562766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:02:24.584014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:02:24.709677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50460","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:03:29 up  2:46,  0 user,  load average: 4.02, 3.17, 2.72
	Linux default-k8s-diff-port-073048 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d23ab58b85e00456c427e2c1b1921afec1e896b4a51aba2f5ff99b4f2127452b] <==
	I1027 20:02:34.520805       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 20:02:34.521104       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1027 20:02:34.521231       1 main.go:148] setting mtu 1500 for CNI 
	I1027 20:02:34.521244       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 20:02:34.521253       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T20:02:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 20:02:34.745569       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 20:02:34.745730       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 20:02:34.745833       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 20:02:34.747335       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 20:03:04.741290       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1027 20:03:04.741401       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1027 20:03:04.741516       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1027 20:03:04.744833       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1027 20:03:06.348119       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 20:03:06.348161       1 metrics.go:72] Registering metrics
	I1027 20:03:06.348214       1 controller.go:711] "Syncing nftables rules"
	I1027 20:03:14.743738       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 20:03:14.743867       1 main.go:301] handling current node
	I1027 20:03:24.742466       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 20:03:24.742501       1 main.go:301] handling current node
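
The "nri plugin exited" line means kindnet tried to register as an NRI plugin but CRI-O's NRI socket was absent, and kindnet carried on without it. A hedged check on the node (the socket path is taken from the log; the CRI-O config stanza is an assumption based on CRI-O's documented NRI support, not confirmed by this report):

	# If NRI is enabled in CRI-O, this socket exists; here it does not, matching the log
	ls -l /var/run/nri/nri.sock
	# Assumption: NRI is toggled via an [crio.nri] "enabled" option in CRI-O's config
	grep -A2 'crio.nri' /etc/crio/crio.conf 2>/dev/null || true
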
	
	
	==> kube-apiserver [e2392517ff6333d76dc1d63087de728819b134b0001fcb3f8e2eea60f57ffd27] <==
	I1027 20:02:25.987764       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1027 20:02:25.991766       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 20:02:26.006287       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 20:02:26.019630       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 20:02:26.020047       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1027 20:02:26.041310       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 20:02:26.049700       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 20:02:26.696793       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1027 20:02:26.702304       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1027 20:02:26.702329       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 20:02:27.483925       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 20:02:27.541298       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 20:02:27.627820       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1027 20:02:27.639893       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1027 20:02:27.641276       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 20:02:27.649611       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 20:02:28.047647       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 20:02:28.427254       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 20:02:28.449728       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1027 20:02:28.466872       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 20:02:33.897610       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1027 20:02:34.128324       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 20:02:34.150460       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 20:02:34.185536       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1027 20:03:27.405736       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:49294: use of closed network connection
	
	
	==> kube-controller-manager [54166af94f78759717c8b38477a142eb99400b7a8195ffbe8f4421dd42159fd5] <==
	I1027 20:02:33.059882       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 20:02:33.062122       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 20:02:33.074384       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 20:02:33.082430       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 20:02:33.082509       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1027 20:02:33.082599       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 20:02:33.082613       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 20:02:33.082644       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 20:02:33.083096       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 20:02:33.083857       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 20:02:33.083901       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1027 20:02:33.083961       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 20:02:33.084031       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 20:02:33.084121       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-073048"
	I1027 20:02:33.084166       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1027 20:02:33.086166       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 20:02:33.086267       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 20:02:33.088131       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 20:02:33.088478       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 20:02:33.090074       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 20:02:33.090280       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 20:02:33.094441       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 20:02:33.098772       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1027 20:02:33.118703       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 20:03:18.091438       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [3f7d47857b94a7f8d6a143355d64261dce8fc0f78f5728d4c0c8122df24a48ee] <==
	I1027 20:02:35.331546       1 server_linux.go:53] "Using iptables proxy"
	I1027 20:02:35.428844       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 20:02:35.529179       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 20:02:35.529303       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1027 20:02:35.529423       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 20:02:35.566329       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 20:02:35.566482       1 server_linux.go:132] "Using iptables Proxier"
	I1027 20:02:35.573332       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 20:02:35.573777       1 server.go:527] "Version info" version="v1.34.1"
	I1027 20:02:35.573966       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 20:02:35.575519       1 config.go:200] "Starting service config controller"
	I1027 20:02:35.575580       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 20:02:35.575624       1 config.go:106] "Starting endpoint slice config controller"
	I1027 20:02:35.575666       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 20:02:35.575714       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 20:02:35.575741       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 20:02:35.580188       1 config.go:309] "Starting node config controller"
	I1027 20:02:35.580283       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 20:02:35.580317       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 20:02:35.676183       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 20:02:35.676293       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 20:02:35.676321       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
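
The "Kube-proxy configuration may be incomplete" warning only flags that nodePortAddresses is unset, so NodePort connections are accepted on every local IP. A sketch of the suggested fix via the kube-proxy ConfigMap (the field name comes from the warning itself; "primary" restricts NodePorts to the node's primary IPs):

	# Inspect the current KubeProxyConfiguration (kubeadm stores it under config.conf)
	kubectl -n kube-system get configmap kube-proxy -o yaml
	# Suggested setting inside config.conf:
	#   nodePortAddresses: ["primary"]
	# then restart the daemonset pods to pick it up:
	kubectl -n kube-system rollout restart daemonset kube-proxy
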
	
	
	==> kube-scheduler [12e7fd3d4126e91427a592901633fbc6e4bcc75e43ff34140e02f79ddb4f62c4] <==
	E1027 20:02:25.990936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 20:02:25.991154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 20:02:25.991244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 20:02:25.991674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 20:02:25.991783       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 20:02:25.998706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 20:02:25.998910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 20:02:25.999042       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 20:02:25.999145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 20:02:25.999260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 20:02:25.999398       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 20:02:25.999623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 20:02:25.999808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1027 20:02:26.000777       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 20:02:26.848169       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 20:02:26.850492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 20:02:26.863098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 20:02:26.926430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 20:02:26.948174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 20:02:26.961608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 20:02:26.981964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 20:02:27.011661       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 20:02:27.232388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 20:02:27.531762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1027 20:02:30.367209       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 20:02:33 default-k8s-diff-port-073048 kubelet[1295]: I1027 20:02:33.031579    1295 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 27 20:02:33 default-k8s-diff-port-073048 kubelet[1295]: I1027 20:02:33.032612    1295 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 27 20:02:33 default-k8s-diff-port-073048 kubelet[1295]: E1027 20:02:33.994916    1295 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:default-k8s-diff-port-073048\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'default-k8s-diff-port-073048' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 27 20:02:34 default-k8s-diff-port-073048 kubelet[1295]: I1027 20:02:34.037401    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/91a97ff3-0f9b-41c0-bbec-870515448861-kube-proxy\") pod \"kube-proxy-dsq46\" (UID: \"91a97ff3-0f9b-41c0-bbec-870515448861\") " pod="kube-system/kube-proxy-dsq46"
	Oct 27 20:02:34 default-k8s-diff-port-073048 kubelet[1295]: I1027 20:02:34.037454    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91a97ff3-0f9b-41c0-bbec-870515448861-xtables-lock\") pod \"kube-proxy-dsq46\" (UID: \"91a97ff3-0f9b-41c0-bbec-870515448861\") " pod="kube-system/kube-proxy-dsq46"
	Oct 27 20:02:34 default-k8s-diff-port-073048 kubelet[1295]: I1027 20:02:34.037481    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/10158916-6994-4c41-ba7d-e5bd80a7fd56-xtables-lock\") pod \"kindnet-qc8zw\" (UID: \"10158916-6994-4c41-ba7d-e5bd80a7fd56\") " pod="kube-system/kindnet-qc8zw"
	Oct 27 20:02:34 default-k8s-diff-port-073048 kubelet[1295]: I1027 20:02:34.037498    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/10158916-6994-4c41-ba7d-e5bd80a7fd56-lib-modules\") pod \"kindnet-qc8zw\" (UID: \"10158916-6994-4c41-ba7d-e5bd80a7fd56\") " pod="kube-system/kindnet-qc8zw"
	Oct 27 20:02:34 default-k8s-diff-port-073048 kubelet[1295]: I1027 20:02:34.037521    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbdzg\" (UniqueName: \"kubernetes.io/projected/91a97ff3-0f9b-41c0-bbec-870515448861-kube-api-access-pbdzg\") pod \"kube-proxy-dsq46\" (UID: \"91a97ff3-0f9b-41c0-bbec-870515448861\") " pod="kube-system/kube-proxy-dsq46"
	Oct 27 20:02:34 default-k8s-diff-port-073048 kubelet[1295]: I1027 20:02:34.037543    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/10158916-6994-4c41-ba7d-e5bd80a7fd56-cni-cfg\") pod \"kindnet-qc8zw\" (UID: \"10158916-6994-4c41-ba7d-e5bd80a7fd56\") " pod="kube-system/kindnet-qc8zw"
	Oct 27 20:02:34 default-k8s-diff-port-073048 kubelet[1295]: I1027 20:02:34.037559    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcfbz\" (UniqueName: \"kubernetes.io/projected/10158916-6994-4c41-ba7d-e5bd80a7fd56-kube-api-access-qcfbz\") pod \"kindnet-qc8zw\" (UID: \"10158916-6994-4c41-ba7d-e5bd80a7fd56\") " pod="kube-system/kindnet-qc8zw"
	Oct 27 20:02:34 default-k8s-diff-port-073048 kubelet[1295]: I1027 20:02:34.037578    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91a97ff3-0f9b-41c0-bbec-870515448861-lib-modules\") pod \"kube-proxy-dsq46\" (UID: \"91a97ff3-0f9b-41c0-bbec-870515448861\") " pod="kube-system/kube-proxy-dsq46"
	Oct 27 20:02:34 default-k8s-diff-port-073048 kubelet[1295]: I1027 20:02:34.182917    1295 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 27 20:02:34 default-k8s-diff-port-073048 kubelet[1295]: W1027 20:02:34.316237    1295 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0d0a6d2c139cfccb44f717c1d4bf1de32b26fb3f98fc7320c9146a486f5ddfbb/crio-38c01665d793831515ff40c2f5f165258b1ef17b73e7fc7bc49f3991ca149860 WatchSource:0}: Error finding container 38c01665d793831515ff40c2f5f165258b1ef17b73e7fc7bc49f3991ca149860: Status 404 returned error can't find the container with id 38c01665d793831515ff40c2f5f165258b1ef17b73e7fc7bc49f3991ca149860
	Oct 27 20:02:35 default-k8s-diff-port-073048 kubelet[1295]: I1027 20:02:35.453546    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-qc8zw" podStartSLOduration=2.453526332 podStartE2EDuration="2.453526332s" podCreationTimestamp="2025-10-27 20:02:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 20:02:34.62607044 +0000 UTC m=+6.289047220" watchObservedRunningTime="2025-10-27 20:02:35.453526332 +0000 UTC m=+7.116503112"
	Oct 27 20:02:35 default-k8s-diff-port-073048 kubelet[1295]: I1027 20:02:35.656632    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dsq46" podStartSLOduration=2.656612651 podStartE2EDuration="2.656612651s" podCreationTimestamp="2025-10-27 20:02:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 20:02:35.625418595 +0000 UTC m=+7.288395375" watchObservedRunningTime="2025-10-27 20:02:35.656612651 +0000 UTC m=+7.319589423"
	Oct 27 20:03:15 default-k8s-diff-port-073048 kubelet[1295]: I1027 20:03:15.298665    1295 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 27 20:03:15 default-k8s-diff-port-073048 kubelet[1295]: I1027 20:03:15.468041    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9fef6eaa-5369-4f2d-9dd1-f3d7074b9a77-tmp\") pod \"storage-provisioner\" (UID: \"9fef6eaa-5369-4f2d-9dd1-f3d7074b9a77\") " pod="kube-system/storage-provisioner"
	Oct 27 20:03:15 default-k8s-diff-port-073048 kubelet[1295]: I1027 20:03:15.468088    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8m7l\" (UniqueName: \"kubernetes.io/projected/9fef6eaa-5369-4f2d-9dd1-f3d7074b9a77-kube-api-access-z8m7l\") pod \"storage-provisioner\" (UID: \"9fef6eaa-5369-4f2d-9dd1-f3d7074b9a77\") " pod="kube-system/storage-provisioner"
	Oct 27 20:03:15 default-k8s-diff-port-073048 kubelet[1295]: I1027 20:03:15.468121    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkgfm\" (UniqueName: \"kubernetes.io/projected/5d420b85-b106-4d91-9ebd-483f8ccfa445-kube-api-access-nkgfm\") pod \"coredns-66bc5c9577-6vc9v\" (UID: \"5d420b85-b106-4d91-9ebd-483f8ccfa445\") " pod="kube-system/coredns-66bc5c9577-6vc9v"
	Oct 27 20:03:15 default-k8s-diff-port-073048 kubelet[1295]: I1027 20:03:15.468144    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d420b85-b106-4d91-9ebd-483f8ccfa445-config-volume\") pod \"coredns-66bc5c9577-6vc9v\" (UID: \"5d420b85-b106-4d91-9ebd-483f8ccfa445\") " pod="kube-system/coredns-66bc5c9577-6vc9v"
	Oct 27 20:03:16 default-k8s-diff-port-073048 kubelet[1295]: I1027 20:03:16.786375    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.786359286 podStartE2EDuration="42.786359286s" podCreationTimestamp="2025-10-27 20:02:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 20:03:16.785850623 +0000 UTC m=+48.448827395" watchObservedRunningTime="2025-10-27 20:03:16.786359286 +0000 UTC m=+48.449336066"
	Oct 27 20:03:19 default-k8s-diff-port-073048 kubelet[1295]: I1027 20:03:19.246424    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6vc9v" podStartSLOduration=45.246396251 podStartE2EDuration="45.246396251s" podCreationTimestamp="2025-10-27 20:02:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 20:03:16.8119572 +0000 UTC m=+48.474933980" watchObservedRunningTime="2025-10-27 20:03:19.246396251 +0000 UTC m=+50.909373023"
	Oct 27 20:03:19 default-k8s-diff-port-073048 kubelet[1295]: I1027 20:03:19.304406    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm5tg\" (UniqueName: \"kubernetes.io/projected/53db98e8-ffba-4a6b-b0b4-8145690263ae-kube-api-access-sm5tg\") pod \"busybox\" (UID: \"53db98e8-ffba-4a6b-b0b4-8145690263ae\") " pod="default/busybox"
	Oct 27 20:03:19 default-k8s-diff-port-073048 kubelet[1295]: W1027 20:03:19.609201    1295 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0d0a6d2c139cfccb44f717c1d4bf1de32b26fb3f98fc7320c9146a486f5ddfbb/crio-df6ffb28802a4ecc2498cd9f84adfa8c3cbd7c7f26b740e21c61f9f915fb6223 WatchSource:0}: Error finding container df6ffb28802a4ecc2498cd9f84adfa8c3cbd7c7f26b740e21c61f9f915fb6223: Status 404 returned error can't find the container with id df6ffb28802a4ecc2498cd9f84adfa8c3cbd7c7f26b740e21c61f9f915fb6223
	Oct 27 20:03:21 default-k8s-diff-port-073048 kubelet[1295]: I1027 20:03:21.791408    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.726555243 podStartE2EDuration="2.791386146s" podCreationTimestamp="2025-10-27 20:03:19 +0000 UTC" firstStartedPulling="2025-10-27 20:03:19.613754326 +0000 UTC m=+51.276731098" lastFinishedPulling="2025-10-27 20:03:21.678585229 +0000 UTC m=+53.341562001" observedRunningTime="2025-10-27 20:03:21.789194273 +0000 UTC m=+53.452171045" watchObservedRunningTime="2025-10-27 20:03:21.791386146 +0000 UTC m=+53.454362926"
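
	The kubelet lines above record two startup durations per pod: podStartE2EDuration (pod creation to observed running) and podStartSLOduration (the same window minus the image-pull time). For the busybox pod, 2.791386146s end-to-end minus the 20:03:19.613 to 20:03:21.678 pull window leaves 0.726555243s. A minimal Go sketch of that arithmetic, using timestamps copied from the busybox line above:

	package main

	import (
		"fmt"
		"log"
		"time"
	)

	// mustParse parses the timestamp format the kubelet log lines use;
	// Go accepts the fractional seconds even though the layout omits them.
	func mustParse(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
		if err != nil {
			log.Fatal(err)
		}
		return t
	}

	func main() {
		// Values copied from the "default/busybox" log line above.
		created := mustParse("2025-10-27 20:03:19 +0000 UTC")
		firstPull := mustParse("2025-10-27 20:03:19.613754326 +0000 UTC")
		lastPull := mustParse("2025-10-27 20:03:21.678585229 +0000 UTC")
		running := mustParse("2025-10-27 20:03:21.791386146 +0000 UTC")

		e2e := running.Sub(created)          // podStartE2EDuration: 2.791386146s
		slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: 0.726555243s
		fmt.Println(e2e, slo)
	}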
	
	
	==> storage-provisioner [9a36c76fe6a5859e35e0d69fd9d671ac68f5400185d35649d296826e994562cc] <==
	I1027 20:03:15.840571       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 20:03:15.952673       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 20:03:15.969171       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 20:03:15.978786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:03:15.989018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 20:03:15.989265       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 20:03:15.989469       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-073048_9a3c8b32-c47b-4461-8f4c-1020d113d878!
	I1027 20:03:15.994729       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"19a71778-5006-4e48-afac-9e5dd7131511", APIVersion:"v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-073048_9a3c8b32-c47b-4461-8f4c-1020d113d878 became leader
	W1027 20:03:16.007181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:03:16.015522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 20:03:16.092551       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-073048_9a3c8b32-c47b-4461-8f4c-1020d113d878!
	W1027 20:03:18.018978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:03:18.031289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:03:20.035165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:03:20.041090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:03:22.045146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:03:22.054574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:03:24.058107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:03:24.063953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:03:26.068237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:03:26.076520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:03:28.081060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:03:28.091157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
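
The repeated warnings in the storage-provisioner log above come from its leader-election lock, which is still built on v1 Endpoints objects (the LeaderElection event references Kind:"Endpoints"), so every lock renewal logs the deprecation. The replacement the warning points at is a coordination.k8s.io Lease. A minimal client-go sketch of the lease-based equivalent, assuming an in-cluster config; the lock name and namespace are taken from the log, and everything else is illustrative rather than the provisioner's actual code:

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// A Lease-based lock; this is what silences the v1 Endpoints warnings.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath", // lock name from the log
			Namespace: "kube-system",
		},
		Client: client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{
			Identity: os.Getenv("POD_NAME"), // must be unique per replica
		},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("acquired lease, starting provisioner controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease, shutting down")
			},
		},
	})
}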
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-073048 -n default-k8s-diff-port-073048
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-073048 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.42s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-702588 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-702588 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (330.124353ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T20:03:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-702588 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
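
The exit-11 failure above is not the addon itself: per the error chain ("check paused: list paused: runc: sudo runc list -f json"), minikube first checks whether the cluster is paused by listing runc's known containers inside the node, and that command dies because runc's default state root /run/runc is absent on these crio nodes. A minimal Go sketch of what such a check does; local exec stands in for the SSH hop into the node, and the JSON fields are the id/status pair from runc's list output:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// runcContainer holds the subset of `runc list -f json` output we care about.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"` // "running", "paused", ...
}

func pausedContainers() ([]runcContainer, error) {
	// Same invocation as in the failure above: when /run/runc is missing,
	// this exits 1 with "open /run/runc: no such file or directory" and the
	// whole addon/pause check aborts instead of reporting zero containers.
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var all []runcContainer
	if err := json.Unmarshal(out, &all); err != nil {
		return nil, err
	}
	var paused []runcContainer
	for _, c := range all {
		if c.Status == "paused" {
			paused = append(paused, c)
		}
	}
	return paused, nil
}

func main() {
	paused, err := pausedContainers()
	if err != nil {
		log.Fatal(err) // the path this test report shows
	}
	fmt.Printf("%d paused containers\n", len(paused))
}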
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-702588
helpers_test.go:243: (dbg) docker inspect newest-cni-702588:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "129b04b839d9dc50e5b41cb4e74b918aa12b2f3b4e01d95ffdf708dd0ebe16e9",
	        "Created": "2025-10-27T20:02:51.266194536Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 470907,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T20:02:51.32964282Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/129b04b839d9dc50e5b41cb4e74b918aa12b2f3b4e01d95ffdf708dd0ebe16e9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/129b04b839d9dc50e5b41cb4e74b918aa12b2f3b4e01d95ffdf708dd0ebe16e9/hostname",
	        "HostsPath": "/var/lib/docker/containers/129b04b839d9dc50e5b41cb4e74b918aa12b2f3b4e01d95ffdf708dd0ebe16e9/hosts",
	        "LogPath": "/var/lib/docker/containers/129b04b839d9dc50e5b41cb4e74b918aa12b2f3b4e01d95ffdf708dd0ebe16e9/129b04b839d9dc50e5b41cb4e74b918aa12b2f3b4e01d95ffdf708dd0ebe16e9-json.log",
	        "Name": "/newest-cni-702588",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-702588:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-702588",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "129b04b839d9dc50e5b41cb4e74b918aa12b2f3b4e01d95ffdf708dd0ebe16e9",
	                "LowerDir": "/var/lib/docker/overlay2/e3fc032b9e2d815c987fdb238001c8985f39b4a9a5af185df55765123364c912-init/diff:/var/lib/docker/overlay2/0567218bbe05e8b59b2aef7ad82032d6716040c1f9d9d73e7de8682101079f76/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e3fc032b9e2d815c987fdb238001c8985f39b4a9a5af185df55765123364c912/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e3fc032b9e2d815c987fdb238001c8985f39b4a9a5af185df55765123364c912/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e3fc032b9e2d815c987fdb238001c8985f39b4a9a5af185df55765123364c912/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-702588",
	                "Source": "/var/lib/docker/volumes/newest-cni-702588/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-702588",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-702588",
	                "name.minikube.sigs.k8s.io": "newest-cni-702588",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ed0e27f91b5f434353afe228c5ba62f445f665b192782feba7e11c1812a817bd",
	            "SandboxKey": "/var/run/docker/netns/ed0e27f91b5f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-702588": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:bd:98:b4:ae:59",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "534dc751a83fae44f47788d8acf2bcc801410f0472bc4104a1e93fed2fe7f7ff",
	                    "EndpointID": "de51de0524b3d84b36382f24582978b8597510d68745f7a971eb456e911d24c1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-702588",
	                        "129b04b839d9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
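
In the inspect output above, the PortBindings entries under HostConfig have empty HostPort values, so Docker assigns ephemeral host ports at start; the resolved mappings appear under NetworkSettings.Ports (22/tcp mapped to 127.0.0.1:33443 here). minikube reads a mapping back with the same Go template that appears later in this log; a standalone sketch of that lookup, with illustrative function and variable names:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// hostPort returns the host port Docker mapped to containerPort (e.g.
// "22/tcp") for the named container, using the inspect template seen in
// the provisioning log below.
func hostPort(container, containerPort string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, containerPort)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPort("newest-cni-702588", "22/tcp")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ssh reachable at 127.0.0.1:" + port) // 33443 in this run
}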
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-702588 -n newest-cni-702588
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-702588 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-702588 logs -n 25: (1.44210565s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p old-k8s-version-942644 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-942644       │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │                     │
	│ delete  │ -p old-k8s-version-942644                                                                                                                                                                                                                     │ old-k8s-version-942644       │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 19:59 UTC │
	│ delete  │ -p old-k8s-version-942644                                                                                                                                                                                                                     │ old-k8s-version-942644       │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 19:59 UTC │
	│ start   │ -p embed-certs-629838 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 19:59 UTC │ 27 Oct 25 20:01 UTC │
	│ addons  │ enable metrics-server -p no-preload-300878 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:00 UTC │                     │
	│ stop    │ -p no-preload-300878 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:00 UTC │ 27 Oct 25 20:00 UTC │
	│ addons  │ enable dashboard -p no-preload-300878 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:00 UTC │ 27 Oct 25 20:00 UTC │
	│ start   │ -p no-preload-300878 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:00 UTC │ 27 Oct 25 20:01 UTC │
	│ addons  │ enable metrics-server -p embed-certs-629838 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │                     │
	│ stop    │ -p embed-certs-629838 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ addons  │ enable dashboard -p embed-certs-629838 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ start   │ -p embed-certs-629838 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:02 UTC │
	│ image   │ no-preload-300878 image list --format=json                                                                                                                                                                                                    │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ pause   │ -p no-preload-300878 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │                     │
	│ delete  │ -p no-preload-300878                                                                                                                                                                                                                          │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ delete  │ -p no-preload-300878                                                                                                                                                                                                                          │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ delete  │ -p disable-driver-mounts-230052                                                                                                                                                                                                               │ disable-driver-mounts-230052 │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:02 UTC │
	│ start   │ -p default-k8s-diff-port-073048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-073048 │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:03 UTC │
	│ image   │ embed-certs-629838 image list --format=json                                                                                                                                                                                                   │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:02 UTC │
	│ pause   │ -p embed-certs-629838 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │                     │
	│ delete  │ -p embed-certs-629838                                                                                                                                                                                                                         │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:02 UTC │
	│ delete  │ -p embed-certs-629838                                                                                                                                                                                                                         │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:02 UTC │
	│ start   │ -p newest-cni-702588 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:03 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-073048 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-073048 │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-702588 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 20:02:45
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 20:02:45.785716  470518 out.go:360] Setting OutFile to fd 1 ...
	I1027 20:02:45.785895  470518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:02:45.785905  470518 out.go:374] Setting ErrFile to fd 2...
	I1027 20:02:45.785910  470518 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:02:45.786184  470518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 20:02:45.786610  470518 out.go:368] Setting JSON to false
	I1027 20:02:45.787737  470518 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9918,"bootTime":1761585448,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1027 20:02:45.787809  470518 start.go:141] virtualization:  
	I1027 20:02:45.791930  470518 out.go:179] * [newest-cni-702588] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 20:02:45.796379  470518 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 20:02:45.796489  470518 notify.go:220] Checking for updates...
	I1027 20:02:45.803150  470518 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 20:02:45.806282  470518 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 20:02:45.809281  470518 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube
	I1027 20:02:45.812543  470518 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 20:02:45.815585  470518 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 20:02:45.819122  470518 config.go:182] Loaded profile config "default-k8s-diff-port-073048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:02:45.819266  470518 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 20:02:45.846690  470518 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 20:02:45.846892  470518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 20:02:45.912213  470518 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 20:02:45.903534386 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 20:02:45.912318  470518 docker.go:318] overlay module found
	I1027 20:02:45.915578  470518 out.go:179] * Using the docker driver based on user configuration
	I1027 20:02:45.918393  470518 start.go:305] selected driver: docker
	I1027 20:02:45.918412  470518 start.go:925] validating driver "docker" against <nil>
	I1027 20:02:45.918427  470518 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 20:02:45.919263  470518 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 20:02:45.975538  470518 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 20:02:45.965754848 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 20:02:45.975709  470518 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1027 20:02:45.975741  470518 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1027 20:02:45.975965  470518 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1027 20:02:45.978728  470518 out.go:179] * Using Docker driver with root privileges
	I1027 20:02:45.981758  470518 cni.go:84] Creating CNI manager for ""
	I1027 20:02:45.981845  470518 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 20:02:45.981862  470518 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 20:02:45.981943  470518 start.go:349] cluster config:
	{Name:newest-cni-702588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-702588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:02:45.986970  470518 out.go:179] * Starting "newest-cni-702588" primary control-plane node in "newest-cni-702588" cluster
	I1027 20:02:45.989827  470518 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 20:02:45.992790  470518 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 20:02:45.995596  470518 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:02:45.995668  470518 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 20:02:45.995686  470518 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 20:02:45.995696  470518 cache.go:58] Caching tarball of preloaded images
	I1027 20:02:45.995795  470518 preload.go:233] Found /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 20:02:45.995804  470518 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 20:02:45.995913  470518 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/config.json ...
	I1027 20:02:45.995946  470518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/config.json: {Name:mk88123d02d0184d7c1eca8717c120dcfee3cace Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:46.017272  470518 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 20:02:46.017300  470518 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 20:02:46.017320  470518 cache.go:232] Successfully downloaded all kic artifacts
	I1027 20:02:46.017343  470518 start.go:360] acquireMachinesLock for newest-cni-702588: {Name:mkcad9a0641a8c73353a267f147f59ff63030507 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 20:02:46.017466  470518 start.go:364] duration metric: took 91.591µs to acquireMachinesLock for "newest-cni-702588"
	I1027 20:02:46.017500  470518 start.go:93] Provisioning new machine with config: &{Name:newest-cni-702588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-702588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 20:02:46.017596  470518 start.go:125] createHost starting for "" (driver="docker")
	W1027 20:02:47.567236  466537 node_ready.go:57] node "default-k8s-diff-port-073048" has "Ready":"False" status (will retry)
	W1027 20:02:49.568105  466537 node_ready.go:57] node "default-k8s-diff-port-073048" has "Ready":"False" status (will retry)
	I1027 20:02:46.021834  470518 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 20:02:46.022092  470518 start.go:159] libmachine.API.Create for "newest-cni-702588" (driver="docker")
	I1027 20:02:46.022148  470518 client.go:168] LocalClient.Create starting
	I1027 20:02:46.022226  470518 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem
	I1027 20:02:46.022274  470518 main.go:141] libmachine: Decoding PEM data...
	I1027 20:02:46.022290  470518 main.go:141] libmachine: Parsing certificate...
	I1027 20:02:46.022347  470518 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem
	I1027 20:02:46.022369  470518 main.go:141] libmachine: Decoding PEM data...
	I1027 20:02:46.022380  470518 main.go:141] libmachine: Parsing certificate...
	I1027 20:02:46.022768  470518 cli_runner.go:164] Run: docker network inspect newest-cni-702588 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 20:02:46.039397  470518 cli_runner.go:211] docker network inspect newest-cni-702588 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 20:02:46.039495  470518 network_create.go:284] running [docker network inspect newest-cni-702588] to gather additional debugging logs...
	I1027 20:02:46.039520  470518 cli_runner.go:164] Run: docker network inspect newest-cni-702588
	W1027 20:02:46.055655  470518 cli_runner.go:211] docker network inspect newest-cni-702588 returned with exit code 1
	I1027 20:02:46.055690  470518 network_create.go:287] error running [docker network inspect newest-cni-702588]: docker network inspect newest-cni-702588: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-702588 not found
	I1027 20:02:46.055706  470518 network_create.go:289] output of [docker network inspect newest-cni-702588]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-702588 not found
	
	** /stderr **
	I1027 20:02:46.055871  470518 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 20:02:46.075608  470518 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-74ee89127400 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:aa:c7:29:bd:7a:a9} reservation:<nil>}
	I1027 20:02:46.076304  470518 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9c57ca829ac8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:0a:24:08:53:d6:07} reservation:<nil>}
	I1027 20:02:46.076757  470518 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b7a7c45fd176 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:52:e6:cd:c8:a6} reservation:<nil>}
	I1027 20:02:46.077258  470518 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a07ea0}
	I1027 20:02:46.077280  470518 network_create.go:124] attempt to create docker network newest-cni-702588 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1027 20:02:46.077342  470518 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-702588 newest-cni-702588
	I1027 20:02:46.141479  470518 network_create.go:108] docker network newest-cni-702588 192.168.76.0/24 created
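
	The network.go lines above show how this run arrived at 192.168.76.0/24: candidate private /24s nine apart (.49, .58, .67, ...) are tried in order, and the first one not already claimed by an existing bridge interface wins. A simplified standalone Go sketch of that scan, checking only the host's interface addresses; minikube's real reservation logic does more than this:

	package main

	import (
		"fmt"
		"log"
		"net"
	)

	// taken reports whether any local interface address falls inside subnet.
	func taken(subnet *net.IPNet) bool {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return false
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && subnet.Contains(ipnet.IP) {
				return true
			}
		}
		return false
	}

	func main() {
		// Same 9-wide stride the log shows: .49, .58, .67, .76, ...
		for third := 49; third <= 247; third += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			_, subnet, err := net.ParseCIDR(cidr)
			if err != nil {
				log.Fatal(err)
			}
			if taken(subnet) {
				fmt.Println("skipping subnet", cidr, "that is taken")
				continue
			}
			fmt.Println("using free private subnet", cidr)
			return
		}
		log.Fatal("no free /24 found")
	}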
	I1027 20:02:46.141508  470518 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-702588" container
	I1027 20:02:46.141591  470518 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 20:02:46.158005  470518 cli_runner.go:164] Run: docker volume create newest-cni-702588 --label name.minikube.sigs.k8s.io=newest-cni-702588 --label created_by.minikube.sigs.k8s.io=true
	I1027 20:02:46.174775  470518 oci.go:103] Successfully created a docker volume newest-cni-702588
	I1027 20:02:46.174870  470518 cli_runner.go:164] Run: docker run --rm --name newest-cni-702588-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-702588 --entrypoint /usr/bin/test -v newest-cni-702588:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 20:02:46.737552  470518 oci.go:107] Successfully prepared a docker volume newest-cni-702588
	I1027 20:02:46.737621  470518 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:02:46.737644  470518 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 20:02:46.737729  470518 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-702588:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1027 20:02:51.568853  466537 node_ready.go:57] node "default-k8s-diff-port-073048" has "Ready":"False" status (will retry)
	W1027 20:02:54.067221  466537 node_ready.go:57] node "default-k8s-diff-port-073048" has "Ready":"False" status (will retry)
	I1027 20:02:51.191470  470518 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-702588:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.453697221s)
	I1027 20:02:51.191514  470518 kic.go:203] duration metric: took 4.453867103s to extract preloaded images to volume ...
	W1027 20:02:51.191680  470518 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1027 20:02:51.191801  470518 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 20:02:51.250675  470518 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-702588 --name newest-cni-702588 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-702588 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-702588 --network newest-cni-702588 --ip 192.168.76.2 --volume newest-cni-702588:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 20:02:51.572912  470518 cli_runner.go:164] Run: docker container inspect newest-cni-702588 --format={{.State.Running}}
	I1027 20:02:51.595181  470518 cli_runner.go:164] Run: docker container inspect newest-cni-702588 --format={{.State.Status}}
	I1027 20:02:51.627389  470518 cli_runner.go:164] Run: docker exec newest-cni-702588 stat /var/lib/dpkg/alternatives/iptables
	I1027 20:02:51.685541  470518 oci.go:144] the created container "newest-cni-702588" has a running status.
	I1027 20:02:51.685568  470518 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21801-266035/.minikube/machines/newest-cni-702588/id_rsa...
	I1027 20:02:51.906722  470518 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21801-266035/.minikube/machines/newest-cni-702588/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 20:02:51.930734  470518 cli_runner.go:164] Run: docker container inspect newest-cni-702588 --format={{.State.Status}}
	I1027 20:02:51.951290  470518 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 20:02:51.951314  470518 kic_runner.go:114] Args: [docker exec --privileged newest-cni-702588 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 20:02:52.004192  470518 cli_runner.go:164] Run: docker container inspect newest-cni-702588 --format={{.State.Status}}
	I1027 20:02:52.033952  470518 machine.go:93] provisionDockerMachine start ...
	I1027 20:02:52.034051  470518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-702588
	I1027 20:02:52.063844  470518 main.go:141] libmachine: Using SSH client type: native
	I1027 20:02:52.065561  470518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1027 20:02:52.065723  470518 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 20:02:52.066772  470518 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1027 20:02:55.222678  470518 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-702588
	
	I1027 20:02:55.222700  470518 ubuntu.go:182] provisioning hostname "newest-cni-702588"
	I1027 20:02:55.222784  470518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-702588
	I1027 20:02:55.240986  470518 main.go:141] libmachine: Using SSH client type: native
	I1027 20:02:55.241292  470518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1027 20:02:55.241303  470518 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-702588 && echo "newest-cni-702588" | sudo tee /etc/hostname
	I1027 20:02:55.405296  470518 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-702588
	
	I1027 20:02:55.405398  470518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-702588
	I1027 20:02:55.425877  470518 main.go:141] libmachine: Using SSH client type: native
	I1027 20:02:55.426323  470518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1027 20:02:55.426378  470518 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-702588' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-702588/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-702588' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 20:02:55.579329  470518 main.go:141] libmachine: SSH cmd err, output: <nil>: 
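The script above is idempotent: it leaves /etc/hosts alone when a line already ends in the hostname, rewrites an existing Debian-style 127.0.1.1 entry if there is one, and only appends as a last resort. The outcome can be confirmed on the node:

    grep -n 'newest-cni-702588' /etc/hosts
    # typically: 127.0.1.1 newest-cni-702588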
	I1027 20:02:55.579401  470518 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-266035/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-266035/.minikube}
	I1027 20:02:55.579441  470518 ubuntu.go:190] setting up certificates
	I1027 20:02:55.579483  470518 provision.go:84] configureAuth start
	I1027 20:02:55.579568  470518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-702588
	I1027 20:02:55.595945  470518 provision.go:143] copyHostCerts
	I1027 20:02:55.596015  470518 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem, removing ...
	I1027 20:02:55.596029  470518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem
	I1027 20:02:55.596110  470518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem (1078 bytes)
	I1027 20:02:55.596214  470518 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem, removing ...
	I1027 20:02:55.596226  470518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem
	I1027 20:02:55.596256  470518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem (1123 bytes)
	I1027 20:02:55.596324  470518 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem, removing ...
	I1027 20:02:55.596335  470518 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem
	I1027 20:02:55.596359  470518 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem (1675 bytes)
	I1027 20:02:55.596415  470518 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem org=jenkins.newest-cni-702588 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-702588]
	I1027 20:02:55.972333  470518 provision.go:177] copyRemoteCerts
	I1027 20:02:55.972402  470518 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 20:02:55.972448  470518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-702588
	I1027 20:02:55.990336  470518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/newest-cni-702588/id_rsa Username:docker}
	I1027 20:02:56.103301  470518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 20:02:56.121966  470518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 20:02:56.141729  470518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 20:02:56.159694  470518 provision.go:87] duration metric: took 580.165595ms to configureAuth
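configureAuth issues a server certificate whose SANs cover every name the machine answers to (the san=[...] list logged above). minikube does this with Go's crypto libraries; a rough openssl equivalent, for illustration only, assuming the ca.pem/ca-key.pem pair from the log:

    # Sign a server cert with the local CA, listing the same SANs as the log.
    openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.newest-cni-702588" \
      -keyout server-key.pem -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:localhost,DNS:minikube,DNS:newest-cni-702588') \
      -out server.pem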
	I1027 20:02:56.159745  470518 ubuntu.go:206] setting minikube options for container-runtime
	I1027 20:02:56.160029  470518 config.go:182] Loaded profile config "newest-cni-702588": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:02:56.160152  470518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-702588
	I1027 20:02:56.177264  470518 main.go:141] libmachine: Using SSH client type: native
	I1027 20:02:56.177664  470518 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I1027 20:02:56.177683  470518 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 20:02:56.436658  470518 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 20:02:56.436722  470518 machine.go:96] duration metric: took 4.402751639s to provisionDockerMachine
	I1027 20:02:56.436749  470518 client.go:171] duration metric: took 10.414590673s to LocalClient.Create
	I1027 20:02:56.436776  470518 start.go:167] duration metric: took 10.414685899s to libmachine.API.Create "newest-cni-702588"
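The SSH command a few lines up left a one-line environment file behind and bounced CRI-O; on the node the result looks like this (assuming, as in the kicbase image, that the crio unit sources /etc/sysconfig/crio.minikube):

    cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    systemctl show crio -p ActiveState --no-pager   # active again after the restart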
	I1027 20:02:56.436809  470518 start.go:293] postStartSetup for "newest-cni-702588" (driver="docker")
	I1027 20:02:56.436838  470518 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 20:02:56.436934  470518 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 20:02:56.436995  470518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-702588
	I1027 20:02:56.463424  470518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/newest-cni-702588/id_rsa Username:docker}
	I1027 20:02:56.571490  470518 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 20:02:56.574600  470518 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 20:02:56.574641  470518 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 20:02:56.574654  470518 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/addons for local assets ...
	I1027 20:02:56.574726  470518 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/files for local assets ...
	I1027 20:02:56.574819  470518 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem -> 2678802.pem in /etc/ssl/certs
	I1027 20:02:56.574926  470518 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 20:02:56.582246  470518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 20:02:56.599835  470518 start.go:296] duration metric: took 162.992741ms for postStartSetup
	I1027 20:02:56.600217  470518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-702588
	I1027 20:02:56.619053  470518 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/config.json ...
	I1027 20:02:56.619357  470518 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 20:02:56.619405  470518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-702588
	I1027 20:02:56.636120  470518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/newest-cni-702588/id_rsa Username:docker}
	I1027 20:02:56.735887  470518 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 20:02:56.740445  470518 start.go:128] duration metric: took 10.722833636s to createHost
	I1027 20:02:56.740470  470518 start.go:83] releasing machines lock for "newest-cni-702588", held for 10.722988479s
	I1027 20:02:56.740539  470518 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-702588
	I1027 20:02:56.756441  470518 ssh_runner.go:195] Run: cat /version.json
	I1027 20:02:56.756541  470518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-702588
	I1027 20:02:56.756546  470518 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 20:02:56.756607  470518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-702588
	I1027 20:02:56.775205  470518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/newest-cni-702588/id_rsa Username:docker}
	I1027 20:02:56.783132  470518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/newest-cni-702588/id_rsa Username:docker}
	I1027 20:02:56.878712  470518 ssh_runner.go:195] Run: systemctl --version
	I1027 20:02:56.967735  470518 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 20:02:57.007063  470518 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 20:02:57.012709  470518 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 20:02:57.012786  470518 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 20:02:57.043379  470518 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1027 20:02:57.043451  470518 start.go:495] detecting cgroup driver to use...
	I1027 20:02:57.043499  470518 detect.go:187] detected "cgroupfs" cgroup driver on host os
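One cheap probe for the host's cgroup layout, which feeds the driver choice above (a generic sketch, not necessarily the exact check detect.go performs):

    # cgroup v2 exposes a unified cgroup2 filesystem at /sys/fs/cgroup;
    # a tmpfs there means the split v1 hierarchies.
    stat -fc %T /sys/fs/cgroup/   # "cgroup2fs" => v2, "tmpfs" => v1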
	I1027 20:02:57.043578  470518 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 20:02:57.061856  470518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 20:02:57.076640  470518 docker.go:218] disabling cri-docker service (if available) ...
	I1027 20:02:57.076722  470518 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 20:02:57.094067  470518 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 20:02:57.113729  470518 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 20:02:57.240224  470518 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 20:02:57.365029  470518 docker.go:234] disabling docker service ...
	I1027 20:02:57.365141  470518 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 20:02:57.387737  470518 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 20:02:57.401018  470518 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 20:02:57.524719  470518 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 20:02:57.641021  470518 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 20:02:57.654389  470518 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 20:02:57.677868  470518 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 20:02:57.677977  470518 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:02:57.687502  470518 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 20:02:57.687573  470518 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:02:57.696839  470518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:02:57.706341  470518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:02:57.715921  470518 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 20:02:57.724702  470518 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:02:57.733674  470518 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:02:57.746868  470518 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:02:57.755827  470518 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 20:02:57.764173  470518 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
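The net effect of the sed run above, reconstructed from the commands themselves (section placement per a stock crio.conf; the node's actual file may differ in ordering):

    # /etc/crio/crio.conf.d/02-crio.conf (fragment)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]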
	I1027 20:02:57.771411  470518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:02:57.884115  470518 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 20:02:58.008496  470518 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 20:02:58.008629  470518 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 20:02:58.013198  470518 start.go:563] Will wait 60s for crictl version
	I1027 20:02:58.013265  470518 ssh_runner.go:195] Run: which crictl
	I1027 20:02:58.017238  470518 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 20:02:58.042783  470518 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 20:02:58.042873  470518 ssh_runner.go:195] Run: crio --version
	I1027 20:02:58.075048  470518 ssh_runner.go:195] Run: crio --version
	I1027 20:02:58.109676  470518 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 20:02:58.112614  470518 cli_runner.go:164] Run: docker network inspect newest-cni-702588 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 20:02:58.129013  470518 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1027 20:02:58.132766  470518 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
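Note the rewrite-then-`cp` shape of that one-liner: inside a container /etc/hosts is bind-mounted by the runtime, so replacing it with mv or sed -i (both of which rename a temp file over the target) fails with EBUSY, while cp truncates and writes the existing inode in place. The same pattern, distilled:

    # Safe update for a bind-mounted file: build elsewhere, copy over in place.
    { grep -v 'host.minikube.internal$' /etc/hosts; \
      echo '192.168.76.1 host.minikube.internal'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$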
	I1027 20:02:58.145318  470518 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1027 20:02:56.067694  466537 node_ready.go:57] node "default-k8s-diff-port-073048" has "Ready":"False" status (will retry)
	W1027 20:02:58.068165  466537 node_ready.go:57] node "default-k8s-diff-port-073048" has "Ready":"False" status (will retry)
	W1027 20:03:00.068986  466537 node_ready.go:57] node "default-k8s-diff-port-073048" has "Ready":"False" status (will retry)
	I1027 20:02:58.148122  470518 kubeadm.go:883] updating cluster {Name:newest-cni-702588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-702588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 20:02:58.148271  470518 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:02:58.148361  470518 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 20:02:58.181657  470518 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 20:02:58.181678  470518 crio.go:433] Images already preloaded, skipping extraction
	I1027 20:02:58.181732  470518 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 20:02:58.214800  470518 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 20:02:58.214822  470518 cache_images.go:85] Images are preloaded, skipping loading
	I1027 20:02:58.214830  470518 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1027 20:02:58.214950  470518 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-702588 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-702588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
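The empty ExecStart= line in the kubelet unit above is standard systemd drop-in syntax: it clears the ExecStart inherited from /lib/systemd/system/kubelet.service so the override's single ExecStart wins. The merged unit can be inspected on the node:

    systemctl cat kubelet                             # base unit plus 10-kubeadm.conf overlaid
    systemctl show kubelet -p ExecStart --no-pager    # the effective command line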
	I1027 20:02:58.215079  470518 ssh_runner.go:195] Run: crio config
	I1027 20:02:58.286344  470518 cni.go:84] Creating CNI manager for ""
	I1027 20:02:58.286416  470518 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 20:02:58.286459  470518 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1027 20:02:58.286499  470518 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-702588 NodeName:newest-cni-702588 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 20:02:58.286691  470518 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-702588"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
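The three documents above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) land in /var/tmp/minikube/kubeadm.yaml a few lines below. A config like this can be sanity-checked by hand before init (recent kubeadm releases ship a validator; the binary path is the one the log uses):

    KUBEADM=/var/lib/minikube/binaries/v1.34.1/kubeadm
    sudo "$KUBEADM" config validate --config /var/tmp/minikube/kubeadm.yaml
    # or rehearse the whole init without touching the node:
    sudo "$KUBEADM" init --config /var/tmp/minikube/kubeadm.yaml --dry-run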
	
	I1027 20:02:58.286796  470518 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 20:02:58.294657  470518 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 20:02:58.294765  470518 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 20:02:58.302317  470518 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1027 20:02:58.315914  470518 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 20:02:58.328925  470518 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1027 20:02:58.342200  470518 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1027 20:02:58.345655  470518 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 20:02:58.355109  470518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:02:58.472732  470518 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 20:02:58.490422  470518 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588 for IP: 192.168.76.2
	I1027 20:02:58.490495  470518 certs.go:195] generating shared ca certs ...
	I1027 20:02:58.490531  470518 certs.go:227] acquiring lock for ca certs: {Name:mk172548fc54811b51040cc0201d01eb4b3dd19b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:58.490702  470518 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key
	I1027 20:02:58.490783  470518 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key
	I1027 20:02:58.490818  470518 certs.go:257] generating profile certs ...
	I1027 20:02:58.490900  470518 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/client.key
	I1027 20:02:58.490947  470518 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/client.crt with IP's: []
	I1027 20:02:58.978122  470518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/client.crt ...
	I1027 20:02:58.978153  470518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/client.crt: {Name:mk2a4be85dd65c523fd79ea6e7981ba2d675e3ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:58.978344  470518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/client.key ...
	I1027 20:02:58.978358  470518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/client.key: {Name:mk1c73e2c4102b4119289b5f83fae52729a6438c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:58.978449  470518 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/apiserver.key.585f5f02
	I1027 20:02:58.978466  470518 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/apiserver.crt.585f5f02 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1027 20:02:59.025760  470518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/apiserver.crt.585f5f02 ...
	I1027 20:02:59.025789  470518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/apiserver.crt.585f5f02: {Name:mka306d141988b7c3e248da0a02dd7daef042114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:59.025964  470518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/apiserver.key.585f5f02 ...
	I1027 20:02:59.025978  470518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/apiserver.key.585f5f02: {Name:mk4c9caea83796ee96b375f57be4ebb1dc60fc1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:59.026066  470518 certs.go:382] copying /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/apiserver.crt.585f5f02 -> /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/apiserver.crt
	I1027 20:02:59.026145  470518 certs.go:386] copying /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/apiserver.key.585f5f02 -> /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/apiserver.key
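The hash-suffixed apiserver.crt.585f5f02/apiserver.key.585f5f02 pair is copied to its canonical name once written; the suffix appears to key the cert to its SAN set so a changed IP forces regeneration. The SANs themselves are easy to confirm with openssl (~/.minikube standing in for the integration tree in the log):

    openssl x509 -noout -text \
      -in ~/.minikube/profiles/newest-cni-702588/apiserver.crt | grep -A1 'Subject Alternative Name'
    #   IP Address:10.96.0.1, IP Address:127.0.0.1, IP Address:10.0.0.1, IP Address:192.168.76.2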
	I1027 20:02:59.026212  470518 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/proxy-client.key
	I1027 20:02:59.026231  470518 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/proxy-client.crt with IP's: []
	I1027 20:02:59.674125  470518 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/proxy-client.crt ...
	I1027 20:02:59.674155  470518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/proxy-client.crt: {Name:mk9281e4c5affe91765a1ef4958a505bd69a3b34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:59.674347  470518 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/proxy-client.key ...
	I1027 20:02:59.674363  470518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/proxy-client.key: {Name:mk377369caab2efb73b63f57b628b5a90c74fe25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:02:59.674551  470518 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880.pem (1338 bytes)
	W1027 20:02:59.674594  470518 certs.go:480] ignoring /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880_empty.pem, impossibly tiny 0 bytes
	I1027 20:02:59.674608  470518 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 20:02:59.674633  470518 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem (1078 bytes)
	I1027 20:02:59.674660  470518 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem (1123 bytes)
	I1027 20:02:59.674684  470518 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem (1675 bytes)
	I1027 20:02:59.674730  470518 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 20:02:59.675360  470518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 20:02:59.695470  470518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 20:02:59.714902  470518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 20:02:59.734632  470518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 20:02:59.753167  470518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1027 20:02:59.772811  470518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 20:02:59.790847  470518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 20:02:59.810227  470518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/newest-cni-702588/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 20:02:59.827972  470518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880.pem --> /usr/share/ca-certificates/267880.pem (1338 bytes)
	I1027 20:02:59.846040  470518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /usr/share/ca-certificates/2678802.pem (1708 bytes)
	I1027 20:02:59.863574  470518 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 20:02:59.882829  470518 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 20:02:59.895829  470518 ssh_runner.go:195] Run: openssl version
	I1027 20:02:59.902038  470518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/267880.pem && ln -fs /usr/share/ca-certificates/267880.pem /etc/ssl/certs/267880.pem"
	I1027 20:02:59.910215  470518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/267880.pem
	I1027 20:02:59.915495  470518 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:04 /usr/share/ca-certificates/267880.pem
	I1027 20:02:59.915613  470518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/267880.pem
	I1027 20:02:59.957262  470518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/267880.pem /etc/ssl/certs/51391683.0"
	I1027 20:02:59.965553  470518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2678802.pem && ln -fs /usr/share/ca-certificates/2678802.pem /etc/ssl/certs/2678802.pem"
	I1027 20:02:59.973736  470518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2678802.pem
	I1027 20:02:59.977126  470518 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:04 /usr/share/ca-certificates/2678802.pem
	I1027 20:02:59.977237  470518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2678802.pem
	I1027 20:03:00.019985  470518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2678802.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 20:03:00.031660  470518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 20:03:00.044149  470518 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:03:00.049549  470518 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:03:00.049719  470518 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:03:00.147420  470518 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
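The 51391683.0 / 3ec20f2e.0 / b5213941.0 link names above are OpenSSL subject-hash links: verifiers locate a CA in /etc/ssl/certs by hashing its subject name, which is exactly what the `openssl x509 -hash` calls in the log compute. The idiom in two lines:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # b5213941.0 here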
	I1027 20:03:00.174266  470518 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 20:03:00.179958  470518 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 20:03:00.180085  470518 kubeadm.go:400] StartCluster: {Name:newest-cni-702588 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-702588 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:03:00.180256  470518 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 20:03:00.180364  470518 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 20:03:00.267055  470518 cri.go:89] found id: ""
	I1027 20:03:00.267153  470518 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 20:03:00.312184  470518 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 20:03:00.360263  470518 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1027 20:03:00.360350  470518 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 20:03:00.397656  470518 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 20:03:00.397674  470518 kubeadm.go:157] found existing configuration files:
	
	I1027 20:03:00.397745  470518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 20:03:00.427842  470518 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 20:03:00.427954  470518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 20:03:00.440940  470518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 20:03:00.456111  470518 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 20:03:00.456241  470518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 20:03:00.467906  470518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 20:03:00.479594  470518 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 20:03:00.479750  470518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 20:03:00.491415  470518 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 20:03:00.502295  470518 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 20:03:00.502401  470518 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 20:03:00.512777  470518 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 20:03:00.566927  470518 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1027 20:03:00.567306  470518 kubeadm.go:318] [preflight] Running pre-flight checks
	I1027 20:03:00.603122  470518 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1027 20:03:00.603251  470518 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1027 20:03:00.603323  470518 kubeadm.go:318] OS: Linux
	I1027 20:03:00.603403  470518 kubeadm.go:318] CGROUPS_CPU: enabled
	I1027 20:03:00.603489  470518 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1027 20:03:00.603573  470518 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1027 20:03:00.603654  470518 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1027 20:03:00.603714  470518 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1027 20:03:00.603771  470518 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1027 20:03:00.603826  470518 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1027 20:03:00.603900  470518 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1027 20:03:00.603959  470518 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1027 20:03:00.697183  470518 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 20:03:00.697301  470518 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 20:03:00.697401  470518 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 20:03:00.705858  470518 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 20:03:00.711446  470518 out.go:252]   - Generating certificates and keys ...
	I1027 20:03:00.711558  470518 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1027 20:03:00.711646  470518 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	W1027 20:03:02.567799  466537 node_ready.go:57] node "default-k8s-diff-port-073048" has "Ready":"False" status (will retry)
	W1027 20:03:05.069096  466537 node_ready.go:57] node "default-k8s-diff-port-073048" has "Ready":"False" status (will retry)
	I1027 20:03:01.769019  470518 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 20:03:02.074071  470518 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1027 20:03:02.832146  470518 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1027 20:03:03.617058  470518 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1027 20:03:03.921467  470518 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1027 20:03:03.921949  470518 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-702588] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1027 20:03:04.503315  470518 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1027 20:03:04.503524  470518 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-702588] and IPs [192.168.76.2 127.0.0.1 ::1]
	W1027 20:03:07.569270  466537 node_ready.go:57] node "default-k8s-diff-port-073048" has "Ready":"False" status (will retry)
	W1027 20:03:10.067868  466537 node_ready.go:57] node "default-k8s-diff-port-073048" has "Ready":"False" status (will retry)
	I1027 20:03:05.845312  470518 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 20:03:07.409001  470518 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 20:03:08.353102  470518 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1027 20:03:08.353425  470518 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 20:03:09.162948  470518 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 20:03:09.800753  470518 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 20:03:09.981966  470518 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 20:03:10.108623  470518 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 20:03:10.713603  470518 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 20:03:10.714213  470518 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 20:03:10.716889  470518 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 20:03:10.720135  470518 out.go:252]   - Booting up control plane ...
	I1027 20:03:10.720234  470518 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 20:03:10.720314  470518 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 20:03:10.721825  470518 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 20:03:10.748215  470518 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 20:03:10.748484  470518 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 20:03:10.756529  470518 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 20:03:10.756776  470518 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 20:03:10.756966  470518 kubeadm.go:318] [kubelet-start] Starting the kubelet
	W1027 20:03:12.068273  466537 node_ready.go:57] node "default-k8s-diff-port-073048" has "Ready":"False" status (will retry)
	W1027 20:03:14.567263  466537 node_ready.go:57] node "default-k8s-diff-port-073048" has "Ready":"False" status (will retry)
	I1027 20:03:10.880506  470518 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 20:03:10.880637  470518 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 20:03:12.883422  470518 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.000880783s
	I1027 20:03:12.884877  470518 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 20:03:12.884990  470518 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1027 20:03:12.885103  470518 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 20:03:12.885191  470518 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 20:03:15.567604  466537 node_ready.go:49] node "default-k8s-diff-port-073048" is "Ready"
	I1027 20:03:15.567644  466537 node_ready.go:38] duration metric: took 41.00349104s for node "default-k8s-diff-port-073048" to be "Ready" ...
	I1027 20:03:15.567660  466537 api_server.go:52] waiting for apiserver process to appear ...
	I1027 20:03:15.567721  466537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 20:03:15.601316  466537 api_server.go:72] duration metric: took 41.979225195s to wait for apiserver process to appear ...
	I1027 20:03:15.601337  466537 api_server.go:88] waiting for apiserver healthz status ...
	I1027 20:03:15.601355  466537 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1027 20:03:15.631459  466537 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1027 20:03:15.632598  466537 api_server.go:141] control plane version: v1.34.1
	I1027 20:03:15.632618  466537 api_server.go:131] duration metric: took 31.27399ms to wait for apiserver health ...
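The same probe works by hand; /healthz (like /livez and /readyz) is reachable anonymously under the default RBAC, so only TLS needs handling (address and port from the log, CA path as scp'd to the node earlier):

    curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.85.2:8444/healthz   # ok
    curl -k https://192.168.85.2:8444/healthz                                        # quick, unverified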
	I1027 20:03:15.632626  466537 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 20:03:15.637564  466537 system_pods.go:59] 8 kube-system pods found
	I1027 20:03:15.637593  466537 system_pods.go:61] "coredns-66bc5c9577-6vc9v" [5d420b85-b106-4d91-9ebd-483f8ccfa445] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:03:15.637607  466537 system_pods.go:61] "etcd-default-k8s-diff-port-073048" [13538216-46ea-4c98-a89b-a4d0be59d38b] Running
	I1027 20:03:15.637613  466537 system_pods.go:61] "kindnet-qc8zw" [10158916-6994-4c41-ba7d-e5bd80a7fd56] Running
	I1027 20:03:15.637617  466537 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-073048" [7bb63b28-d55e-4f2f-bd52-0dbce0ee5858] Running
	I1027 20:03:15.637622  466537 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-073048" [502a218b-490e-47f8-b946-6d2df9e30913] Running
	I1027 20:03:15.637626  466537 system_pods.go:61] "kube-proxy-dsq46" [91a97ff3-0f9b-41c0-bbec-870515448861] Running
	I1027 20:03:15.637630  466537 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-073048" [b090cbea-1aa4-46ff-a144-7a93a8fdeece] Running
	I1027 20:03:15.637636  466537 system_pods.go:61] "storage-provisioner" [9fef6eaa-5369-4f2d-9dd1-f3d7074b9a77] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 20:03:15.637643  466537 system_pods.go:74] duration metric: took 5.01019ms to wait for pod list to return data ...
	I1027 20:03:15.637651  466537 default_sa.go:34] waiting for default service account to be created ...
	I1027 20:03:15.640036  466537 default_sa.go:45] found service account: "default"
	I1027 20:03:15.640086  466537 default_sa.go:55] duration metric: took 2.429626ms for default service account to be created ...
	I1027 20:03:15.640109  466537 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 20:03:15.643233  466537 system_pods.go:86] 8 kube-system pods found
	I1027 20:03:15.643294  466537 system_pods.go:89] "coredns-66bc5c9577-6vc9v" [5d420b85-b106-4d91-9ebd-483f8ccfa445] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:03:15.643315  466537 system_pods.go:89] "etcd-default-k8s-diff-port-073048" [13538216-46ea-4c98-a89b-a4d0be59d38b] Running
	I1027 20:03:15.643339  466537 system_pods.go:89] "kindnet-qc8zw" [10158916-6994-4c41-ba7d-e5bd80a7fd56] Running
	I1027 20:03:15.643384  466537 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-073048" [7bb63b28-d55e-4f2f-bd52-0dbce0ee5858] Running
	I1027 20:03:15.643409  466537 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-073048" [502a218b-490e-47f8-b946-6d2df9e30913] Running
	I1027 20:03:15.643431  466537 system_pods.go:89] "kube-proxy-dsq46" [91a97ff3-0f9b-41c0-bbec-870515448861] Running
	I1027 20:03:15.643452  466537 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-073048" [b090cbea-1aa4-46ff-a144-7a93a8fdeece] Running
	I1027 20:03:15.643488  466537 system_pods.go:89] "storage-provisioner" [9fef6eaa-5369-4f2d-9dd1-f3d7074b9a77] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 20:03:15.643540  466537 retry.go:31] will retry after 235.50127ms: missing components: kube-dns
	I1027 20:03:15.928665  466537 system_pods.go:86] 8 kube-system pods found
	I1027 20:03:15.928754  466537 system_pods.go:89] "coredns-66bc5c9577-6vc9v" [5d420b85-b106-4d91-9ebd-483f8ccfa445] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:03:15.928778  466537 system_pods.go:89] "etcd-default-k8s-diff-port-073048" [13538216-46ea-4c98-a89b-a4d0be59d38b] Running
	I1027 20:03:15.928818  466537 system_pods.go:89] "kindnet-qc8zw" [10158916-6994-4c41-ba7d-e5bd80a7fd56] Running
	I1027 20:03:15.928845  466537 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-073048" [7bb63b28-d55e-4f2f-bd52-0dbce0ee5858] Running
	I1027 20:03:15.928866  466537 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-073048" [502a218b-490e-47f8-b946-6d2df9e30913] Running
	I1027 20:03:15.928888  466537 system_pods.go:89] "kube-proxy-dsq46" [91a97ff3-0f9b-41c0-bbec-870515448861] Running
	I1027 20:03:15.928922  466537 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-073048" [b090cbea-1aa4-46ff-a144-7a93a8fdeece] Running
	I1027 20:03:15.928946  466537 system_pods.go:89] "storage-provisioner" [9fef6eaa-5369-4f2d-9dd1-f3d7074b9a77] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 20:03:15.928975  466537 retry.go:31] will retry after 314.055603ms: missing components: kube-dns
	I1027 20:03:16.248749  466537 system_pods.go:86] 8 kube-system pods found
	I1027 20:03:16.254584  466537 system_pods.go:89] "coredns-66bc5c9577-6vc9v" [5d420b85-b106-4d91-9ebd-483f8ccfa445] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:03:16.254607  466537 system_pods.go:89] "etcd-default-k8s-diff-port-073048" [13538216-46ea-4c98-a89b-a4d0be59d38b] Running
	I1027 20:03:16.254618  466537 system_pods.go:89] "kindnet-qc8zw" [10158916-6994-4c41-ba7d-e5bd80a7fd56] Running
	I1027 20:03:16.254624  466537 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-073048" [7bb63b28-d55e-4f2f-bd52-0dbce0ee5858] Running
	I1027 20:03:16.254629  466537 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-073048" [502a218b-490e-47f8-b946-6d2df9e30913] Running
	I1027 20:03:16.254634  466537 system_pods.go:89] "kube-proxy-dsq46" [91a97ff3-0f9b-41c0-bbec-870515448861] Running
	I1027 20:03:16.254639  466537 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-073048" [b090cbea-1aa4-46ff-a144-7a93a8fdeece] Running
	I1027 20:03:16.254645  466537 system_pods.go:89] "storage-provisioner" [9fef6eaa-5369-4f2d-9dd1-f3d7074b9a77] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 20:03:16.254664  466537 retry.go:31] will retry after 482.287114ms: missing components: kube-dns
	I1027 20:03:16.740562  466537 system_pods.go:86] 8 kube-system pods found
	I1027 20:03:16.740604  466537 system_pods.go:89] "coredns-66bc5c9577-6vc9v" [5d420b85-b106-4d91-9ebd-483f8ccfa445] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:03:16.740612  466537 system_pods.go:89] "etcd-default-k8s-diff-port-073048" [13538216-46ea-4c98-a89b-a4d0be59d38b] Running
	I1027 20:03:16.740619  466537 system_pods.go:89] "kindnet-qc8zw" [10158916-6994-4c41-ba7d-e5bd80a7fd56] Running
	I1027 20:03:16.740624  466537 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-073048" [7bb63b28-d55e-4f2f-bd52-0dbce0ee5858] Running
	I1027 20:03:16.740629  466537 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-073048" [502a218b-490e-47f8-b946-6d2df9e30913] Running
	I1027 20:03:16.740633  466537 system_pods.go:89] "kube-proxy-dsq46" [91a97ff3-0f9b-41c0-bbec-870515448861] Running
	I1027 20:03:16.740643  466537 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-073048" [b090cbea-1aa4-46ff-a144-7a93a8fdeece] Running
	I1027 20:03:16.740650  466537 system_pods.go:89] "storage-provisioner" [9fef6eaa-5369-4f2d-9dd1-f3d7074b9a77] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 20:03:16.740671  466537 retry.go:31] will retry after 516.518808ms: missing components: kube-dns
	I1027 20:03:17.261626  466537 system_pods.go:86] 8 kube-system pods found
	I1027 20:03:17.261661  466537 system_pods.go:89] "coredns-66bc5c9577-6vc9v" [5d420b85-b106-4d91-9ebd-483f8ccfa445] Running
	I1027 20:03:17.261668  466537 system_pods.go:89] "etcd-default-k8s-diff-port-073048" [13538216-46ea-4c98-a89b-a4d0be59d38b] Running
	I1027 20:03:17.261676  466537 system_pods.go:89] "kindnet-qc8zw" [10158916-6994-4c41-ba7d-e5bd80a7fd56] Running
	I1027 20:03:17.261681  466537 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-073048" [7bb63b28-d55e-4f2f-bd52-0dbce0ee5858] Running
	I1027 20:03:17.261685  466537 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-073048" [502a218b-490e-47f8-b946-6d2df9e30913] Running
	I1027 20:03:17.261689  466537 system_pods.go:89] "kube-proxy-dsq46" [91a97ff3-0f9b-41c0-bbec-870515448861] Running
	I1027 20:03:17.261694  466537 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-073048" [b090cbea-1aa4-46ff-a144-7a93a8fdeece] Running
	I1027 20:03:17.261701  466537 system_pods.go:89] "storage-provisioner" [9fef6eaa-5369-4f2d-9dd1-f3d7074b9a77] Running
	I1027 20:03:17.261712  466537 system_pods.go:126] duration metric: took 1.621584317s to wait for k8s-apps to be running ...
	I1027 20:03:17.261728  466537 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 20:03:17.261787  466537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 20:03:17.282881  466537 system_svc.go:56] duration metric: took 21.14395ms WaitForService to wait for kubelet
	I1027 20:03:17.282910  466537 kubeadm.go:586] duration metric: took 43.660826304s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 20:03:17.282929  466537 node_conditions.go:102] verifying NodePressure condition ...
	I1027 20:03:17.286165  466537 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 20:03:17.286203  466537 node_conditions.go:123] node cpu capacity is 2
	I1027 20:03:17.286216  466537 node_conditions.go:105] duration metric: took 3.281549ms to run NodePressure ...
	I1027 20:03:17.286228  466537 start.go:241] waiting for startup goroutines ...
	I1027 20:03:17.286236  466537 start.go:246] waiting for cluster config update ...
	I1027 20:03:17.286254  466537 start.go:255] writing updated cluster config ...
	I1027 20:03:17.286560  466537 ssh_runner.go:195] Run: rm -f paused
	I1027 20:03:17.293004  466537 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 20:03:17.296662  466537 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6vc9v" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:03:17.303505  466537 pod_ready.go:94] pod "coredns-66bc5c9577-6vc9v" is "Ready"
	I1027 20:03:17.303533  466537 pod_ready.go:86] duration metric: took 6.834508ms for pod "coredns-66bc5c9577-6vc9v" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:03:17.305863  466537 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-073048" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:03:17.310685  466537 pod_ready.go:94] pod "etcd-default-k8s-diff-port-073048" is "Ready"
	I1027 20:03:17.310719  466537 pod_ready.go:86] duration metric: took 4.832497ms for pod "etcd-default-k8s-diff-port-073048" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:03:17.313103  466537 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-073048" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:03:17.317546  466537 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-073048" is "Ready"
	I1027 20:03:17.317578  466537 pod_ready.go:86] duration metric: took 4.440516ms for pod "kube-apiserver-default-k8s-diff-port-073048" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:03:17.319868  466537 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-073048" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:03:17.697637  466537 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-073048" is "Ready"
	I1027 20:03:17.697680  466537 pod_ready.go:86] duration metric: took 377.777787ms for pod "kube-controller-manager-default-k8s-diff-port-073048" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:03:17.897708  466537 pod_ready.go:83] waiting for pod "kube-proxy-dsq46" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:03:18.297085  466537 pod_ready.go:94] pod "kube-proxy-dsq46" is "Ready"
	I1027 20:03:18.297159  466537 pod_ready.go:86] duration metric: took 399.42515ms for pod "kube-proxy-dsq46" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:03:18.497927  466537 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-073048" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:03:18.897735  466537 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-073048" is "Ready"
	I1027 20:03:18.897762  466537 pod_ready.go:86] duration metric: took 399.768811ms for pod "kube-scheduler-default-k8s-diff-port-073048" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:03:18.897776  466537 pod_ready.go:40] duration metric: took 1.604735913s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 20:03:18.976577  466537 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 20:03:18.980080  466537 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-073048" cluster and "default" namespace by default
	I1027 20:03:16.103784  470518 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.217753414s
	I1027 20:03:18.665470  470518 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.780574885s
	I1027 20:03:20.386640  470518 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.501682037s
	I1027 20:03:20.409932  470518 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 20:03:20.423437  470518 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 20:03:20.441572  470518 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 20:03:20.441781  470518 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-702588 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 20:03:20.454775  470518 kubeadm.go:318] [bootstrap-token] Using token: mzl0zn.i55e3nx87rh2mbwp
	I1027 20:03:20.458078  470518 out.go:252]   - Configuring RBAC rules ...
	I1027 20:03:20.458204  470518 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 20:03:20.470453  470518 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 20:03:20.478853  470518 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 20:03:20.484430  470518 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 20:03:20.489111  470518 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 20:03:20.495563  470518 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 20:03:20.793244  470518 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 20:03:21.228645  470518 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1027 20:03:21.793541  470518 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1027 20:03:21.794622  470518 kubeadm.go:318] 
	I1027 20:03:21.794695  470518 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1027 20:03:21.794709  470518 kubeadm.go:318] 
	I1027 20:03:21.794786  470518 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1027 20:03:21.794794  470518 kubeadm.go:318] 
	I1027 20:03:21.794819  470518 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1027 20:03:21.794881  470518 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 20:03:21.794934  470518 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 20:03:21.794943  470518 kubeadm.go:318] 
	I1027 20:03:21.795024  470518 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1027 20:03:21.795034  470518 kubeadm.go:318] 
	I1027 20:03:21.795081  470518 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 20:03:21.795085  470518 kubeadm.go:318] 
	I1027 20:03:21.795137  470518 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1027 20:03:21.795216  470518 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 20:03:21.795284  470518 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 20:03:21.795289  470518 kubeadm.go:318] 
	I1027 20:03:21.795371  470518 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 20:03:21.795447  470518 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1027 20:03:21.795452  470518 kubeadm.go:318] 
	I1027 20:03:21.795535  470518 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token mzl0zn.i55e3nx87rh2mbwp \
	I1027 20:03:21.795660  470518 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9473f077605f6866eb08c8a712af7c56a0deb8dc2a907ec8cbb4b8ae396e8f1f \
	I1027 20:03:21.795686  470518 kubeadm.go:318] 	--control-plane 
	I1027 20:03:21.795691  470518 kubeadm.go:318] 
	I1027 20:03:21.795774  470518 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1027 20:03:21.795778  470518 kubeadm.go:318] 
	I1027 20:03:21.795858  470518 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token mzl0zn.i55e3nx87rh2mbwp \
	I1027 20:03:21.795959  470518 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9473f077605f6866eb08c8a712af7c56a0deb8dc2a907ec8cbb4b8ae396e8f1f 
	I1027 20:03:21.800525  470518 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1027 20:03:21.800774  470518 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1027 20:03:21.800920  470518 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 20:03:21.800939  470518 cni.go:84] Creating CNI manager for ""
	I1027 20:03:21.800947  470518 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 20:03:21.804256  470518 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 20:03:21.808092  470518 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 20:03:21.812849  470518 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 20:03:21.812877  470518 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 20:03:21.827464  470518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 20:03:22.159276  470518 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 20:03:22.159358  470518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:03:22.159408  470518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-702588 minikube.k8s.io/updated_at=2025_10_27T20_03_22_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f minikube.k8s.io/name=newest-cni-702588 minikube.k8s.io/primary=true
	I1027 20:03:22.331502  470518 ops.go:34] apiserver oom_adj: -16
	I1027 20:03:22.331604  470518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:03:22.832139  470518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:03:23.331776  470518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:03:23.832212  470518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:03:24.332196  470518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:03:24.832188  470518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:03:25.332355  470518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:03:25.831912  470518 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:03:25.934837  470518 kubeadm.go:1113] duration metric: took 3.77554406s to wait for elevateKubeSystemPrivileges
	I1027 20:03:25.934864  470518 kubeadm.go:402] duration metric: took 25.754783149s to StartCluster
	I1027 20:03:25.934891  470518 settings.go:142] acquiring lock: {Name:mk49b9e2e9b7901c6b51e89b8b1e8ee7f9e88107 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:03:25.934951  470518 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 20:03:25.936013  470518 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/kubeconfig: {Name:mk0a03a594f8d9f9199b1c7cff7c550cec8414c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:03:25.936238  470518 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 20:03:25.936392  470518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 20:03:25.936579  470518 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 20:03:25.936649  470518 config.go:182] Loaded profile config "newest-cni-702588": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:03:25.936655  470518 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-702588"
	I1027 20:03:25.936679  470518 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-702588"
	I1027 20:03:25.936682  470518 addons.go:69] Setting default-storageclass=true in profile "newest-cni-702588"
	I1027 20:03:25.936695  470518 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-702588"
	I1027 20:03:25.936706  470518 host.go:66] Checking if "newest-cni-702588" exists ...
	I1027 20:03:25.937000  470518 cli_runner.go:164] Run: docker container inspect newest-cni-702588 --format={{.State.Status}}
	I1027 20:03:25.937156  470518 cli_runner.go:164] Run: docker container inspect newest-cni-702588 --format={{.State.Status}}
	I1027 20:03:25.940880  470518 out.go:179] * Verifying Kubernetes components...
	I1027 20:03:25.943729  470518 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:03:25.974806  470518 addons.go:238] Setting addon default-storageclass=true in "newest-cni-702588"
	I1027 20:03:25.974845  470518 host.go:66] Checking if "newest-cni-702588" exists ...
	I1027 20:03:25.975286  470518 cli_runner.go:164] Run: docker container inspect newest-cni-702588 --format={{.State.Status}}
	I1027 20:03:25.986005  470518 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 20:03:25.991152  470518 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 20:03:25.991188  470518 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 20:03:25.991259  470518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-702588
	I1027 20:03:26.009986  470518 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 20:03:26.010024  470518 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 20:03:26.010089  470518 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-702588
	I1027 20:03:26.047558  470518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/newest-cni-702588/id_rsa Username:docker}
	I1027 20:03:26.049020  470518 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/newest-cni-702588/id_rsa Username:docker}
	I1027 20:03:26.214132  470518 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 20:03:26.246823  470518 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 20:03:26.293737  470518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 20:03:26.348863  470518 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 20:03:26.793545  470518 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1027 20:03:26.795562  470518 api_server.go:52] waiting for apiserver process to appear ...
	I1027 20:03:26.795638  470518 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 20:03:27.105561  470518 api_server.go:72] duration metric: took 1.169298134s to wait for apiserver process to appear ...
	I1027 20:03:27.105582  470518 api_server.go:88] waiting for apiserver healthz status ...
	I1027 20:03:27.105610  470518 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 20:03:27.120716  470518 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1027 20:03:27.122229  470518 api_server.go:141] control plane version: v1.34.1
	I1027 20:03:27.122256  470518 api_server.go:131] duration metric: took 16.665708ms to wait for apiserver health ...
	I1027 20:03:27.122266  470518 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 20:03:27.122631  470518 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1027 20:03:27.125474  470518 addons.go:514] duration metric: took 1.188884695s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 20:03:27.126448  470518 system_pods.go:59] 8 kube-system pods found
	I1027 20:03:27.126483  470518 system_pods.go:61] "coredns-66bc5c9577-xclwd" [eee638fa-65a2-4c75-ba2c-7615f09c51da] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 20:03:27.126492  470518 system_pods.go:61] "etcd-newest-cni-702588" [84702404-c34c-450f-a8c7-f94b0088ac21] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 20:03:27.126501  470518 system_pods.go:61] "kindnet-7ctmm" [98e70164-cd51-4563-91d0-7c0bae3c2ade] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1027 20:03:27.126513  470518 system_pods.go:61] "kube-apiserver-newest-cni-702588" [e508c926-b287-4ae8-83a6-a1a4360c85f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 20:03:27.126525  470518 system_pods.go:61] "kube-controller-manager-newest-cni-702588" [01fa6132-66de-422f-bbd3-2c1e46280199] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 20:03:27.126532  470518 system_pods.go:61] "kube-proxy-k9lhg" [f36ed32e-d331-485d-ba07-01353f65e231] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1027 20:03:27.126541  470518 system_pods.go:61] "kube-scheduler-newest-cni-702588" [6089c80f-86d4-4837-9eaf-2e473ed151d5] Running
	I1027 20:03:27.126547  470518 system_pods.go:61] "storage-provisioner" [9074befc-b06a-4ae1-8cf5-5544c94b2e07] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 20:03:27.126552  470518 system_pods.go:74] duration metric: took 4.281242ms to wait for pod list to return data ...
	I1027 20:03:27.126559  470518 default_sa.go:34] waiting for default service account to be created ...
	I1027 20:03:27.128894  470518 default_sa.go:45] found service account: "default"
	I1027 20:03:27.128917  470518 default_sa.go:55] duration metric: took 2.352237ms for default service account to be created ...
	I1027 20:03:27.128927  470518 kubeadm.go:586] duration metric: took 1.192668164s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1027 20:03:27.128942  470518 node_conditions.go:102] verifying NodePressure condition ...
	I1027 20:03:27.131355  470518 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 20:03:27.131386  470518 node_conditions.go:123] node cpu capacity is 2
	I1027 20:03:27.131398  470518 node_conditions.go:105] duration metric: took 2.449942ms to run NodePressure ...
	I1027 20:03:27.131409  470518 start.go:241] waiting for startup goroutines ...
	I1027 20:03:27.299797  470518 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-702588" context rescaled to 1 replicas
	I1027 20:03:27.299831  470518 start.go:246] waiting for cluster config update ...
	I1027 20:03:27.299842  470518 start.go:255] writing updated cluster config ...
	I1027 20:03:27.300139  470518 ssh_runner.go:195] Run: rm -f paused
	I1027 20:03:27.413635  470518 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 20:03:27.416916  470518 out.go:179] * Done! kubectl is now configured to use "newest-cni-702588" cluster and "default" namespace by default
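The api_server.go probe above ("Checking apiserver healthz at https://192.168.76.2:8443/healthz ... returned 200: ok") is a plain HTTPS GET that expects a 200 with body "ok". A minimal sketch follows; the URL is taken from the log, and skipping TLS verification is a shortcut for the sketch only (minikube itself trusts the cluster CA).

// sketch: the apiserver health probe behind the api_server.go lines above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func healthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// assumption: verification skipped for the sketch only;
			// a real client would trust the cluster CA instead
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("apiserver unhealthy: %d %q", resp.StatusCode, body)
	}
	return nil
}

func main() {
	fmt.Println(healthz("https://192.168.76.2:8443/healthz"))
}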
	
	
	==> CRI-O <==
	Oct 27 20:03:26 newest-cni-702588 crio[839]: time="2025-10-27T20:03:26.909961837Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:03:26 newest-cni-702588 crio[839]: time="2025-10-27T20:03:26.913853034Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=7e166b28-fac5-4dde-a2fd-846d4414cfb3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 20:03:26 newest-cni-702588 crio[839]: time="2025-10-27T20:03:26.918279003Z" level=info msg="Ran pod sandbox f83dacfee193a7aaf04da67be4c4d9322f4f0572afccbf5fe79625bea5bdf5e6 with infra container: kube-system/kindnet-7ctmm/POD" id=7e166b28-fac5-4dde-a2fd-846d4414cfb3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 20:03:26 newest-cni-702588 crio[839]: time="2025-10-27T20:03:26.919734388Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=cc18d9f1-87cb-4f34-88c9-9ca961bd0579 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:03:26 newest-cni-702588 crio[839]: time="2025-10-27T20:03:26.921072197Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d1e75be1-2118-4d9b-9cce-5c208d3b43d0 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:03:26 newest-cni-702588 crio[839]: time="2025-10-27T20:03:26.926965858Z" level=info msg="Creating container: kube-system/kindnet-7ctmm/kindnet-cni" id=f141a9cc-fc72-4659-84eb-7b6d90fa304e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 20:03:26 newest-cni-702588 crio[839]: time="2025-10-27T20:03:26.927340444Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:03:26 newest-cni-702588 crio[839]: time="2025-10-27T20:03:26.94576758Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:03:26 newest-cni-702588 crio[839]: time="2025-10-27T20:03:26.946368269Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:03:26 newest-cni-702588 crio[839]: time="2025-10-27T20:03:26.977834557Z" level=info msg="Created container 3e8407f7dbb14010fe6a80012b8ba653aecffbbd664f8709eb8a5dcaad212e4d: kube-system/kindnet-7ctmm/kindnet-cni" id=f141a9cc-fc72-4659-84eb-7b6d90fa304e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 20:03:26 newest-cni-702588 crio[839]: time="2025-10-27T20:03:26.978794514Z" level=info msg="Starting container: 3e8407f7dbb14010fe6a80012b8ba653aecffbbd664f8709eb8a5dcaad212e4d" id=ac91c079-6142-4050-bec5-cf5113d0da92 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 20:03:26 newest-cni-702588 crio[839]: time="2025-10-27T20:03:26.980995027Z" level=info msg="Started container" PID=1503 containerID=3e8407f7dbb14010fe6a80012b8ba653aecffbbd664f8709eb8a5dcaad212e4d description=kube-system/kindnet-7ctmm/kindnet-cni id=ac91c079-6142-4050-bec5-cf5113d0da92 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f83dacfee193a7aaf04da67be4c4d9322f4f0572afccbf5fe79625bea5bdf5e6
	Oct 27 20:03:27 newest-cni-702588 crio[839]: time="2025-10-27T20:03:27.860363812Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-k9lhg/POD" id=2bb44d62-1fd1-44b0-ae07-3a002fba02ac name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 20:03:27 newest-cni-702588 crio[839]: time="2025-10-27T20:03:27.860423338Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:03:27 newest-cni-702588 crio[839]: time="2025-10-27T20:03:27.866771452Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=2bb44d62-1fd1-44b0-ae07-3a002fba02ac name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 20:03:27 newest-cni-702588 crio[839]: time="2025-10-27T20:03:27.881031886Z" level=info msg="Ran pod sandbox 1be3c3e7bcc4f34946088813c3deda4a10057f1f6a1a688c30532822acf1dbb9 with infra container: kube-system/kube-proxy-k9lhg/POD" id=2bb44d62-1fd1-44b0-ae07-3a002fba02ac name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 20:03:27 newest-cni-702588 crio[839]: time="2025-10-27T20:03:27.882379081Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=38efb954-5d4a-434d-8576-ad762a219c85 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:03:27 newest-cni-702588 crio[839]: time="2025-10-27T20:03:27.883910132Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=112945bf-3e2b-4374-b232-7b322e673030 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:03:27 newest-cni-702588 crio[839]: time="2025-10-27T20:03:27.888923793Z" level=info msg="Creating container: kube-system/kube-proxy-k9lhg/kube-proxy" id=50a45f1c-e57b-4960-a150-d687b087e50f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 20:03:27 newest-cni-702588 crio[839]: time="2025-10-27T20:03:27.889190477Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:03:27 newest-cni-702588 crio[839]: time="2025-10-27T20:03:27.908443108Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:03:27 newest-cni-702588 crio[839]: time="2025-10-27T20:03:27.909817035Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:03:27 newest-cni-702588 crio[839]: time="2025-10-27T20:03:27.942124513Z" level=info msg="Created container 45847332dae8fe097eeea60d56b9b367123bf09142e7e676331c64eee01cd9da: kube-system/kube-proxy-k9lhg/kube-proxy" id=50a45f1c-e57b-4960-a150-d687b087e50f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 20:03:27 newest-cni-702588 crio[839]: time="2025-10-27T20:03:27.943857396Z" level=info msg="Starting container: 45847332dae8fe097eeea60d56b9b367123bf09142e7e676331c64eee01cd9da" id=c8c92ebd-fbf2-40fd-855b-86a1f5c9c142 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 20:03:27 newest-cni-702588 crio[839]: time="2025-10-27T20:03:27.94810225Z" level=info msg="Started container" PID=1575 containerID=45847332dae8fe097eeea60d56b9b367123bf09142e7e676331c64eee01cd9da description=kube-system/kube-proxy-k9lhg/kube-proxy id=c8c92ebd-fbf2-40fd-855b-86a1f5c9c142 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1be3c3e7bcc4f34946088813c3deda4a10057f1f6a1a688c30532822acf1dbb9
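The RunPodSandbox / CreateContainer / StartContainer entries above are CRI gRPC calls arriving at CRI-O over its local socket. A small client-side sketch that talks to the same RuntimeService and lists containers; the socket path is CRI-O's conventional default and is an assumption here, as are the module versions of grpc-go and cri-api.

// sketch: a minimal CRI client against the crio socket, assuming
// google.golang.org/grpc and k8s.io/cri-api are on the module path.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// same service the "Started container" log lines above come from
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		name := "?"
		if c.Metadata != nil {
			name = c.Metadata.Name
		}
		fmt.Println(c.Id, name, c.State)
	}
}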
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	45847332dae8f       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   1 second ago        Running             kube-proxy                0                   1be3c3e7bcc4f       kube-proxy-k9lhg                            kube-system
	3e8407f7dbb14       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 seconds ago       Running             kindnet-cni               0                   f83dacfee193a       kindnet-7ctmm                               kube-system
	47aada1793848       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   15 seconds ago      Running             kube-controller-manager   0                   aa966492e773c       kube-controller-manager-newest-cni-702588   kube-system
	5bff9b32e6ab7       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   16 seconds ago      Running             kube-apiserver            0                   6f7c80cfbe7c3       kube-apiserver-newest-cni-702588            kube-system
	bcbea17fb51c4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   16 seconds ago      Running             kube-scheduler            0                   81f3308c80a9d       kube-scheduler-newest-cni-702588            kube-system
	1372ab731307e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   16 seconds ago      Running             etcd                      0                   5f7937d308897       etcd-newest-cni-702588                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-702588
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-702588
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=newest-cni-702588
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T20_03_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 20:03:18 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-702588
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 20:03:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 20:03:21 +0000   Mon, 27 Oct 2025 20:03:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 20:03:21 +0000   Mon, 27 Oct 2025 20:03:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 20:03:21 +0000   Mon, 27 Oct 2025 20:03:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 27 Oct 2025 20:03:21 +0000   Mon, 27 Oct 2025 20:03:13 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-702588
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                4ba06414-4234-4b37-9dae-dda0eb66f304
	  Boot ID:                    23ea05b4-8203-4fb9-a84a-5deb4b091cbb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-702588                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8s
	  kube-system                 kindnet-7ctmm                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-702588             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-702588    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-k9lhg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-702588             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 1s                 kube-proxy       
	  Warning  CgroupV1                 17s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  17s (x9 over 17s)  kubelet          Node newest-cni-702588 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17s (x8 over 17s)  kubelet          Node newest-cni-702588 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17s (x7 over 17s)  kubelet          Node newest-cni-702588 status is now: NodeHasSufficientPID
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 8s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8s                 kubelet          Node newest-cni-702588 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s                 kubelet          Node newest-cni-702588 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s                 kubelet          Node newest-cni-702588 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-702588 event: Registered Node newest-cni-702588 in Controller
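The "Ready False ... no CNI configuration file" condition above is ordinary node status data, readable through the API like any other field. A sketch reading the same conditions with client-go; the kubeconfig path is an assumption, and the node name is taken from the log.

// sketch: read the node conditions that "describe nodes" prints above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// assumption: kubeconfig at a conventional path
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(),
		"newest-cni-702588", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		// the NotReady + NetworkPluginNotReady state above shows up here
		fmt.Printf("%s=%s (%s)\n", c.Type, c.Status, c.Reason)
	}
}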
	
	
	==> dmesg <==
	[Oct27 19:39] overlayfs: idmapped layers are currently not supported
	[Oct27 19:40] overlayfs: idmapped layers are currently not supported
	[  +7.015891] overlayfs: idmapped layers are currently not supported
	[Oct27 19:41] overlayfs: idmapped layers are currently not supported
	[ +28.455216] overlayfs: idmapped layers are currently not supported
	[Oct27 19:42] overlayfs: idmapped layers are currently not supported
	[ +26.490174] overlayfs: idmapped layers are currently not supported
	[Oct27 19:44] overlayfs: idmapped layers are currently not supported
	[Oct27 19:45] overlayfs: idmapped layers are currently not supported
	[Oct27 19:47] overlayfs: idmapped layers are currently not supported
	[Oct27 19:49] overlayfs: idmapped layers are currently not supported
	[ +31.410335] overlayfs: idmapped layers are currently not supported
	[Oct27 19:51] overlayfs: idmapped layers are currently not supported
	[Oct27 19:53] overlayfs: idmapped layers are currently not supported
	[Oct27 19:54] overlayfs: idmapped layers are currently not supported
	[Oct27 19:55] overlayfs: idmapped layers are currently not supported
	[Oct27 19:56] overlayfs: idmapped layers are currently not supported
	[Oct27 19:57] overlayfs: idmapped layers are currently not supported
	[Oct27 19:58] overlayfs: idmapped layers are currently not supported
	[Oct27 19:59] overlayfs: idmapped layers are currently not supported
	[Oct27 20:00] overlayfs: idmapped layers are currently not supported
	[ +41.321877] overlayfs: idmapped layers are currently not supported
	[Oct27 20:01] overlayfs: idmapped layers are currently not supported
	[Oct27 20:02] overlayfs: idmapped layers are currently not supported
	[Oct27 20:03] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [1372ab731307ea1e9aa9bd46421d45408e803413e35f2b11708754d7b22236c0] <==
	{"level":"warn","ts":"2025-10-27T20:03:17.055152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:17.075407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:17.096446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:17.114346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:17.126960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:17.143592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:17.180915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:17.183686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:17.202555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:17.236377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:17.272079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:17.279893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:17.316171Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:17.335512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:17.349277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:17.367599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:17.389768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:17.404458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:17.423514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:17.443262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:17.476617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:17.511864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:17.522457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:17.544460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:17.633644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52498","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:03:29 up  2:46,  0 user,  load average: 4.02, 3.17, 2.72
	Linux newest-cni-702588 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3e8407f7dbb14010fe6a80012b8ba653aecffbbd664f8709eb8a5dcaad212e4d] <==
	I1027 20:03:27.117814       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 20:03:27.118179       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1027 20:03:27.120063       1 main.go:148] setting mtu 1500 for CNI 
	I1027 20:03:27.120152       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 20:03:27.120221       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T20:03:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 20:03:27.345954       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 20:03:27.346027       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 20:03:27.346060       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 20:03:27.346823       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
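kindnet's "Waiting for informer caches to sync" / "Caches are synced" pair (and the same lines in the kube-controller-manager section below) is the standard client-go shared-informer startup pattern: start the factory, then block until the initial list is cached before doing any work. A minimal sketch, assuming in-cluster config and a node informer.

// sketch: the shared-informer startup behind the "caches to sync" logs.
package main

import (
	"fmt"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumption: running in a pod
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	factory := informers.NewSharedInformerFactory(cs, 0)
	nodes := factory.Core().V1().Nodes().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// blocks until the initial list is cached -- the "Caches are synced" log
	if !cache.WaitForCacheSync(stop, nodes.HasSynced) {
		panic("caches did not sync")
	}
	fmt.Println("caches are synced; controller can start")
}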
	
	
	==> kube-apiserver [5bff9b32e6ab75584230a71ce220ab1b366fa46b82c949b31a957d6ab8394e31] <==
	I1027 20:03:18.703327       1 policy_source.go:240] refreshing policies
	E1027 20:03:18.704110       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1027 20:03:18.753486       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 20:03:18.806228       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 20:03:18.806345       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1027 20:03:18.811451       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 20:03:18.814113       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 20:03:18.893315       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 20:03:19.356298       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1027 20:03:19.363506       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1027 20:03:19.363531       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 20:03:20.215017       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 20:03:20.268161       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 20:03:20.369097       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1027 20:03:20.376180       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1027 20:03:20.377284       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 20:03:20.382426       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 20:03:20.568248       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 20:03:21.205762       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 20:03:21.227291       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1027 20:03:21.239981       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 20:03:26.233868       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 20:03:26.243702       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 20:03:26.506504       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 20:03:26.546141       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
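The apiserver's "allocated clusterIPs" lines carve addresses out of the service CIDR 10.96.0.0/12 created at startup. A tiny check that the two well-known addresses from the log, 10.96.0.1 (kubernetes) and 10.96.0.10 (kube-dns), fall inside that range.

// sketch: verify the logged clusterIPs sit inside the service CIDR.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	cidr := netip.MustParsePrefix("10.96.0.0/12")
	for _, ip := range []string{"10.96.0.1", "10.96.0.10"} {
		addr := netip.MustParseAddr(ip)
		fmt.Printf("%s in %s: %v\n", addr, cidr, cidr.Contains(addr))
	}
}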
	
	
	==> kube-controller-manager [47aada1793848682b0e3b6bd80539295812bc206d0d6481a8307bc2e1fb07727] <==
	I1027 20:03:25.616081       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 20:03:25.619187       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 20:03:25.619197       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 20:03:25.616096       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 20:03:25.616106       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 20:03:25.616119       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 20:03:25.616128       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1027 20:03:25.616137       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 20:03:25.618176       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 20:03:25.618195       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 20:03:25.624948       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 20:03:25.624999       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1027 20:03:25.625056       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 20:03:25.629711       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 20:03:25.631100       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 20:03:25.639620       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 20:03:25.646800       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 20:03:25.658235       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 20:03:25.665053       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 20:03:25.667377       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1027 20:03:25.667389       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1027 20:03:25.667440       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1027 20:03:25.668677       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 20:03:25.675917       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 20:03:25.694308       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [45847332dae8fe097eeea60d56b9b367123bf09142e7e676331c64eee01cd9da] <==
	I1027 20:03:28.030491       1 server_linux.go:53] "Using iptables proxy"
	I1027 20:03:28.141447       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 20:03:28.253652       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 20:03:28.253699       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1027 20:03:28.253768       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 20:03:28.343750       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 20:03:28.343806       1 server_linux.go:132] "Using iptables Proxier"
	I1027 20:03:28.375984       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 20:03:28.376517       1 server.go:527] "Version info" version="v1.34.1"
	I1027 20:03:28.376532       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 20:03:28.377695       1 config.go:106] "Starting endpoint slice config controller"
	I1027 20:03:28.377708       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 20:03:28.378008       1 config.go:200] "Starting service config controller"
	I1027 20:03:28.378016       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 20:03:28.378278       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 20:03:28.378284       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 20:03:28.378676       1 config.go:309] "Starting node config controller"
	I1027 20:03:28.378683       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 20:03:28.378688       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 20:03:28.478630       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 20:03:28.478673       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 20:03:28.478718       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [bcbea17fb51c4085fb294cf591afd5f2a925a409d14fdcd2c8dc7014d41ddded] <==
	E1027 20:03:18.658325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 20:03:18.658548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 20:03:18.662697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 20:03:18.664216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 20:03:18.664495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 20:03:18.664597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 20:03:18.664689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 20:03:18.666212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 20:03:18.666976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 20:03:18.667037       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 20:03:18.667082       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 20:03:18.667119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 20:03:18.667177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 20:03:19.474879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 20:03:19.482307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 20:03:19.582044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 20:03:19.647241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1027 20:03:19.676519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 20:03:19.749543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 20:03:19.840405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 20:03:19.847224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 20:03:19.848601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 20:03:19.852109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 20:03:19.853528       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1027 20:03:22.445024       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 20:03:22 newest-cni-702588 kubelet[1325]: I1027 20:03:22.271969    1325 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-702588"
	Oct 27 20:03:22 newest-cni-702588 kubelet[1325]: I1027 20:03:22.273572    1325 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-702588"
	Oct 27 20:03:22 newest-cni-702588 kubelet[1325]: I1027 20:03:22.274038    1325 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-702588"
	Oct 27 20:03:22 newest-cni-702588 kubelet[1325]: E1027 20:03:22.294947    1325 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-702588\" already exists" pod="kube-system/kube-scheduler-newest-cni-702588"
	Oct 27 20:03:22 newest-cni-702588 kubelet[1325]: E1027 20:03:22.295468    1325 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-702588\" already exists" pod="kube-system/kube-controller-manager-newest-cni-702588"
	Oct 27 20:03:22 newest-cni-702588 kubelet[1325]: E1027 20:03:22.295772    1325 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-702588\" already exists" pod="kube-system/etcd-newest-cni-702588"
	Oct 27 20:03:22 newest-cni-702588 kubelet[1325]: I1027 20:03:22.339945    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-702588" podStartSLOduration=1.3399263000000001 podStartE2EDuration="1.3399263s" podCreationTimestamp="2025-10-27 20:03:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 20:03:22.321771722 +0000 UTC m=+1.278346897" watchObservedRunningTime="2025-10-27 20:03:22.3399263 +0000 UTC m=+1.296501458"
	Oct 27 20:03:22 newest-cni-702588 kubelet[1325]: I1027 20:03:22.353987    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-702588" podStartSLOduration=1.353968525 podStartE2EDuration="1.353968525s" podCreationTimestamp="2025-10-27 20:03:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 20:03:22.3402324 +0000 UTC m=+1.296807559" watchObservedRunningTime="2025-10-27 20:03:22.353968525 +0000 UTC m=+1.310543683"
	Oct 27 20:03:22 newest-cni-702588 kubelet[1325]: I1027 20:03:22.371474    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-702588" podStartSLOduration=1.371453976 podStartE2EDuration="1.371453976s" podCreationTimestamp="2025-10-27 20:03:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 20:03:22.354255598 +0000 UTC m=+1.310830773" watchObservedRunningTime="2025-10-27 20:03:22.371453976 +0000 UTC m=+1.328029135"
	Oct 27 20:03:25 newest-cni-702588 kubelet[1325]: I1027 20:03:25.636702    1325 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 27 20:03:25 newest-cni-702588 kubelet[1325]: I1027 20:03:25.637262    1325 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 27 20:03:25 newest-cni-702588 kubelet[1325]: I1027 20:03:25.731262    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-702588" podStartSLOduration=4.731234699 podStartE2EDuration="4.731234699s" podCreationTimestamp="2025-10-27 20:03:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 20:03:22.371836784 +0000 UTC m=+1.328411943" watchObservedRunningTime="2025-10-27 20:03:25.731234699 +0000 UTC m=+4.687809858"
	Oct 27 20:03:26 newest-cni-702588 kubelet[1325]: E1027 20:03:26.661773    1325 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:newest-cni-702588\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'newest-cni-702588' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 27 20:03:26 newest-cni-702588 kubelet[1325]: I1027 20:03:26.709268    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/98e70164-cd51-4563-91d0-7c0bae3c2ade-cni-cfg\") pod \"kindnet-7ctmm\" (UID: \"98e70164-cd51-4563-91d0-7c0bae3c2ade\") " pod="kube-system/kindnet-7ctmm"
	Oct 27 20:03:26 newest-cni-702588 kubelet[1325]: I1027 20:03:26.709385    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98e70164-cd51-4563-91d0-7c0bae3c2ade-xtables-lock\") pod \"kindnet-7ctmm\" (UID: \"98e70164-cd51-4563-91d0-7c0bae3c2ade\") " pod="kube-system/kindnet-7ctmm"
	Oct 27 20:03:26 newest-cni-702588 kubelet[1325]: I1027 20:03:26.709409    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n29rc\" (UniqueName: \"kubernetes.io/projected/98e70164-cd51-4563-91d0-7c0bae3c2ade-kube-api-access-n29rc\") pod \"kindnet-7ctmm\" (UID: \"98e70164-cd51-4563-91d0-7c0bae3c2ade\") " pod="kube-system/kindnet-7ctmm"
	Oct 27 20:03:26 newest-cni-702588 kubelet[1325]: I1027 20:03:26.709456    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f36ed32e-d331-485d-ba07-01353f65e231-xtables-lock\") pod \"kube-proxy-k9lhg\" (UID: \"f36ed32e-d331-485d-ba07-01353f65e231\") " pod="kube-system/kube-proxy-k9lhg"
	Oct 27 20:03:26 newest-cni-702588 kubelet[1325]: I1027 20:03:26.709478    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f36ed32e-d331-485d-ba07-01353f65e231-kube-proxy\") pod \"kube-proxy-k9lhg\" (UID: \"f36ed32e-d331-485d-ba07-01353f65e231\") " pod="kube-system/kube-proxy-k9lhg"
	Oct 27 20:03:26 newest-cni-702588 kubelet[1325]: I1027 20:03:26.709496    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98e70164-cd51-4563-91d0-7c0bae3c2ade-lib-modules\") pod \"kindnet-7ctmm\" (UID: \"98e70164-cd51-4563-91d0-7c0bae3c2ade\") " pod="kube-system/kindnet-7ctmm"
	Oct 27 20:03:26 newest-cni-702588 kubelet[1325]: I1027 20:03:26.709620    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f36ed32e-d331-485d-ba07-01353f65e231-lib-modules\") pod \"kube-proxy-k9lhg\" (UID: \"f36ed32e-d331-485d-ba07-01353f65e231\") " pod="kube-system/kube-proxy-k9lhg"
	Oct 27 20:03:26 newest-cni-702588 kubelet[1325]: I1027 20:03:26.709638    1325 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctkjc\" (UniqueName: \"kubernetes.io/projected/f36ed32e-d331-485d-ba07-01353f65e231-kube-api-access-ctkjc\") pod \"kube-proxy-k9lhg\" (UID: \"f36ed32e-d331-485d-ba07-01353f65e231\") " pod="kube-system/kube-proxy-k9lhg"
	Oct 27 20:03:26 newest-cni-702588 kubelet[1325]: I1027 20:03:26.845080    1325 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 27 20:03:26 newest-cni-702588 kubelet[1325]: W1027 20:03:26.916401    1325 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/129b04b839d9dc50e5b41cb4e74b918aa12b2f3b4e01d95ffdf708dd0ebe16e9/crio-f83dacfee193a7aaf04da67be4c4d9322f4f0572afccbf5fe79625bea5bdf5e6 WatchSource:0}: Error finding container f83dacfee193a7aaf04da67be4c4d9322f4f0572afccbf5fe79625bea5bdf5e6: Status 404 returned error can't find the container with id f83dacfee193a7aaf04da67be4c4d9322f4f0572afccbf5fe79625bea5bdf5e6
	Oct 27 20:03:27 newest-cni-702588 kubelet[1325]: W1027 20:03:27.878695    1325 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/129b04b839d9dc50e5b41cb4e74b918aa12b2f3b4e01d95ffdf708dd0ebe16e9/crio-1be3c3e7bcc4f34946088813c3deda4a10057f1f6a1a688c30532822acf1dbb9 WatchSource:0}: Error finding container 1be3c3e7bcc4f34946088813c3deda4a10057f1f6a1a688c30532822acf1dbb9: Status 404 returned error can't find the container with id 1be3c3e7bcc4f34946088813c3deda4a10057f1f6a1a688c30532822acf1dbb9
	Oct 27 20:03:28 newest-cni-702588 kubelet[1325]: I1027 20:03:28.327609    1325 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-7ctmm" podStartSLOduration=2.327582762 podStartE2EDuration="2.327582762s" podCreationTimestamp="2025-10-27 20:03:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 20:03:27.3203142 +0000 UTC m=+6.276889375" watchObservedRunningTime="2025-10-27 20:03:28.327582762 +0000 UTC m=+7.284158011"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-702588 -n newest-cni-702588
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-702588 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-xclwd storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-702588 describe pod coredns-66bc5c9577-xclwd storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-702588 describe pod coredns-66bc5c9577-xclwd storage-provisioner: exit status 1 (144.935568ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-xclwd" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-702588 describe pod coredns-66bc5c9577-xclwd storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.28s)
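Note on this failure: the post-mortem helper first listed the pods not in phase Running (coredns-66bc5c9577-xclwd, storage-provisioner), then tried to describe them; by the time `describe` ran, both were gone, so the report ends with NotFound instead of useful pod state. A minimal sketch of the same two-step check (context name taken from the log above; the gap between the two calls is inherently racy):

	# List non-Running pods, then describe them - the sequence the helper runs.
	kubectl --context newest-cni-702588 get po -A --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'
	kubectl --context newest-cni-702588 describe pod coredns-66bc5c9577-xclwd storage-provisioner
	# NotFound on the second call means the pods were replaced or removed in between.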

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (8.45s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-702588 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-702588 --alsologtostderr -v=1: exit status 80 (2.419885511s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-702588 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 20:03:49.313068  476969 out.go:360] Setting OutFile to fd 1 ...
	I1027 20:03:49.313193  476969 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:03:49.313209  476969 out.go:374] Setting ErrFile to fd 2...
	I1027 20:03:49.313216  476969 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:03:49.313464  476969 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 20:03:49.313738  476969 out.go:368] Setting JSON to false
	I1027 20:03:49.313765  476969 mustload.go:65] Loading cluster: newest-cni-702588
	I1027 20:03:49.314163  476969 config.go:182] Loaded profile config "newest-cni-702588": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:03:49.314632  476969 cli_runner.go:164] Run: docker container inspect newest-cni-702588 --format={{.State.Status}}
	I1027 20:03:49.336982  476969 host.go:66] Checking if "newest-cni-702588" exists ...
	I1027 20:03:49.337370  476969 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 20:03:49.437850  476969 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-27 20:03:49.42639069 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 20:03:49.438501  476969 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-702588 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1027 20:03:49.441878  476969 out.go:179] * Pausing node newest-cni-702588 ... 
	I1027 20:03:49.445609  476969 host.go:66] Checking if "newest-cni-702588" exists ...
	I1027 20:03:49.445937  476969 ssh_runner.go:195] Run: systemctl --version
	I1027 20:03:49.445981  476969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-702588
	I1027 20:03:49.494811  476969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/newest-cni-702588/id_rsa Username:docker}
	I1027 20:03:49.619551  476969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 20:03:49.644437  476969 pause.go:52] kubelet running: true
	I1027 20:03:49.644515  476969 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 20:03:49.957369  476969 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 20:03:49.957462  476969 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 20:03:50.076365  476969 cri.go:89] found id: "a70a228324365c1e561d8cfe7dd7a2028a7e30509835bde732e44b3427b70131"
	I1027 20:03:50.076393  476969 cri.go:89] found id: "20f5ab471ade7a086d7d0e7f822d761ca44030cdc590decc67a3adffd147ad8f"
	I1027 20:03:50.076398  476969 cri.go:89] found id: "50e1303f4abc6c0afd542eb75bdf66db50d8a97bbe1ab2251b60144ee40ebbdf"
	I1027 20:03:50.076402  476969 cri.go:89] found id: "8d587d21ae021230e28963c8c71ea231d0d95d971ef978ac498061c57511609b"
	I1027 20:03:50.076405  476969 cri.go:89] found id: "72abb4993a475217f2a8c95e18b5ddddfda40e2dc2e0cf42b38d6d9ef04c3a63"
	I1027 20:03:50.076408  476969 cri.go:89] found id: "f8fab5749ac40e20fd27699d7357e78ed023c1efecb49166aa43e7474e86b557"
	I1027 20:03:50.076411  476969 cri.go:89] found id: ""
	I1027 20:03:50.076462  476969 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 20:03:50.098562  476969 retry.go:31] will retry after 319.677881ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T20:03:50Z" level=error msg="open /run/runc: no such file or directory"
	I1027 20:03:50.419118  476969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 20:03:50.449804  476969 pause.go:52] kubelet running: false
	I1027 20:03:50.449869  476969 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 20:03:50.654868  476969 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 20:03:50.654939  476969 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 20:03:50.748475  476969 cri.go:89] found id: "a70a228324365c1e561d8cfe7dd7a2028a7e30509835bde732e44b3427b70131"
	I1027 20:03:50.748494  476969 cri.go:89] found id: "20f5ab471ade7a086d7d0e7f822d761ca44030cdc590decc67a3adffd147ad8f"
	I1027 20:03:50.748499  476969 cri.go:89] found id: "50e1303f4abc6c0afd542eb75bdf66db50d8a97bbe1ab2251b60144ee40ebbdf"
	I1027 20:03:50.748505  476969 cri.go:89] found id: "8d587d21ae021230e28963c8c71ea231d0d95d971ef978ac498061c57511609b"
	I1027 20:03:50.748508  476969 cri.go:89] found id: "72abb4993a475217f2a8c95e18b5ddddfda40e2dc2e0cf42b38d6d9ef04c3a63"
	I1027 20:03:50.748512  476969 cri.go:89] found id: "f8fab5749ac40e20fd27699d7357e78ed023c1efecb49166aa43e7474e86b557"
	I1027 20:03:50.748515  476969 cri.go:89] found id: ""
	I1027 20:03:50.748565  476969 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 20:03:50.760657  476969 retry.go:31] will retry after 543.528107ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T20:03:50Z" level=error msg="open /run/runc: no such file or directory"
	I1027 20:03:51.305008  476969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 20:03:51.319893  476969 pause.go:52] kubelet running: false
	I1027 20:03:51.319961  476969 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 20:03:51.503295  476969 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 20:03:51.503388  476969 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 20:03:51.608626  476969 cri.go:89] found id: "a70a228324365c1e561d8cfe7dd7a2028a7e30509835bde732e44b3427b70131"
	I1027 20:03:51.608654  476969 cri.go:89] found id: "20f5ab471ade7a086d7d0e7f822d761ca44030cdc590decc67a3adffd147ad8f"
	I1027 20:03:51.608659  476969 cri.go:89] found id: "50e1303f4abc6c0afd542eb75bdf66db50d8a97bbe1ab2251b60144ee40ebbdf"
	I1027 20:03:51.608664  476969 cri.go:89] found id: "8d587d21ae021230e28963c8c71ea231d0d95d971ef978ac498061c57511609b"
	I1027 20:03:51.608667  476969 cri.go:89] found id: "72abb4993a475217f2a8c95e18b5ddddfda40e2dc2e0cf42b38d6d9ef04c3a63"
	I1027 20:03:51.608671  476969 cri.go:89] found id: "f8fab5749ac40e20fd27699d7357e78ed023c1efecb49166aa43e7474e86b557"
	I1027 20:03:51.608675  476969 cri.go:89] found id: ""
	I1027 20:03:51.608723  476969 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 20:03:51.623806  476969 out.go:203] 
	W1027 20:03:51.626623  476969 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T20:03:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T20:03:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 20:03:51.626640  476969 out.go:285] * 
	* 
	W1027 20:03:51.634073  476969 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 20:03:51.637755  476969 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-702588 --alsologtostderr -v=1 failed: exit status 80
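The stderr above pins down where pause breaks: kubelet is stopped and crictl does find the expected kube-system containers, but all three attempts to enumerate running containers with `sudo runc list -f json` fail with `open /run/runc: no such file or directory`, so the retry loop gives up with GUEST_PAUSE. A hedged repro sketch against the same node container (the crio state-root check is an assumption about the likely cause, not something this log confirms):

	# Reproduce the failing call directly inside the node container.
	docker exec newest-cni-702588 sudo runc list -f json
	# Then check which state root the CRI runtime actually uses; if crio is
	# configured with a different runtime_root (or a different runtime), a bare
	# `runc list` against the default /run/runc sees nothing even on a healthy node.
	docker exec newest-cni-702588 sh -c 'ls -d /run/runc 2>/dev/null; crio config 2>/dev/null | grep -E "runtime_(path|root)"'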
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-702588
helpers_test.go:243: (dbg) docker inspect newest-cni-702588:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "129b04b839d9dc50e5b41cb4e74b918aa12b2f3b4e01d95ffdf708dd0ebe16e9",
	        "Created": "2025-10-27T20:02:51.266194536Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 474490,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T20:03:32.861200098Z",
	            "FinishedAt": "2025-10-27T20:03:31.808461582Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/129b04b839d9dc50e5b41cb4e74b918aa12b2f3b4e01d95ffdf708dd0ebe16e9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/129b04b839d9dc50e5b41cb4e74b918aa12b2f3b4e01d95ffdf708dd0ebe16e9/hostname",
	        "HostsPath": "/var/lib/docker/containers/129b04b839d9dc50e5b41cb4e74b918aa12b2f3b4e01d95ffdf708dd0ebe16e9/hosts",
	        "LogPath": "/var/lib/docker/containers/129b04b839d9dc50e5b41cb4e74b918aa12b2f3b4e01d95ffdf708dd0ebe16e9/129b04b839d9dc50e5b41cb4e74b918aa12b2f3b4e01d95ffdf708dd0ebe16e9-json.log",
	        "Name": "/newest-cni-702588",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-702588:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-702588",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "129b04b839d9dc50e5b41cb4e74b918aa12b2f3b4e01d95ffdf708dd0ebe16e9",
	                "LowerDir": "/var/lib/docker/overlay2/e3fc032b9e2d815c987fdb238001c8985f39b4a9a5af185df55765123364c912-init/diff:/var/lib/docker/overlay2/0567218bbe05e8b59b2aef7ad82032d6716040c1f9d9d73e7de8682101079f76/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e3fc032b9e2d815c987fdb238001c8985f39b4a9a5af185df55765123364c912/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e3fc032b9e2d815c987fdb238001c8985f39b4a9a5af185df55765123364c912/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e3fc032b9e2d815c987fdb238001c8985f39b4a9a5af185df55765123364c912/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-702588",
	                "Source": "/var/lib/docker/volumes/newest-cni-702588/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-702588",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-702588",
	                "name.minikube.sigs.k8s.io": "newest-cni-702588",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f156dade5f94d40c31cd1a99daa741f2c6b2e78cbb0b0daac60177b574d66ac0",
	            "SandboxKey": "/var/run/docker/netns/f156dade5f94",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-702588": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:49:35:2b:c1:4f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "534dc751a83fae44f47788d8acf2bcc801410f0472bc4104a1e93fed2fe7f7ff",
	                    "EndpointID": "0279d48705f0c12285c404d320b13ca1f1b2853c84319df3e83e9f61d5d0cfc2",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-702588",
	                        "129b04b839d9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
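The inspect output shows the node container itself is fine: `"Running": true`, `"Paused": false`, and a restart at 20:03:32, seconds before the pause attempt. Note also that `/run` is a tmpfs mount (see `"Tmpfs"` above), so anything under /run/runc would have been emptied by that restart; whether that is the actual cause is speculation, but it is consistent with the `open /run/runc` error. A quick check using standard docker CLI flags:

	docker inspect -f '{{.State.Running}} {{.State.Paused}}' newest-cni-702588
	# Prints "true false" per the JSON above: the failure happened before
	# anything was actually paused.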
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-702588 -n newest-cni-702588
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-702588 -n newest-cni-702588: exit status 2 (471.737741ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-702588 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-702588 logs -n 25: (1.676655361s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-629838 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │                     │
	│ stop    │ -p embed-certs-629838 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ addons  │ enable dashboard -p embed-certs-629838 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ start   │ -p embed-certs-629838 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:02 UTC │
	│ image   │ no-preload-300878 image list --format=json                                                                                                                                                                                                    │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ pause   │ -p no-preload-300878 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │                     │
	│ delete  │ -p no-preload-300878                                                                                                                                                                                                                          │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ delete  │ -p no-preload-300878                                                                                                                                                                                                                          │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ delete  │ -p disable-driver-mounts-230052                                                                                                                                                                                                               │ disable-driver-mounts-230052 │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:02 UTC │
	│ start   │ -p default-k8s-diff-port-073048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-073048 │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:03 UTC │
	│ image   │ embed-certs-629838 image list --format=json                                                                                                                                                                                                   │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:02 UTC │
	│ pause   │ -p embed-certs-629838 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │                     │
	│ delete  │ -p embed-certs-629838                                                                                                                                                                                                                         │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:02 UTC │
	│ delete  │ -p embed-certs-629838                                                                                                                                                                                                                         │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:02 UTC │
	│ start   │ -p newest-cni-702588 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:03 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-073048 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-073048 │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-702588 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │                     │
	│ stop    │ -p newest-cni-702588 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │ 27 Oct 25 20:03 UTC │
	│ stop    │ -p default-k8s-diff-port-073048 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-073048 │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │ 27 Oct 25 20:03 UTC │
	│ addons  │ enable dashboard -p newest-cni-702588 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │ 27 Oct 25 20:03 UTC │
	│ start   │ -p newest-cni-702588 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │ 27 Oct 25 20:03 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-073048 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-073048 │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │ 27 Oct 25 20:03 UTC │
	│ start   │ -p default-k8s-diff-port-073048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-073048 │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │                     │
	│ image   │ newest-cni-702588 image list --format=json                                                                                                                                                                                                    │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │ 27 Oct 25 20:03 UTC │
	│ pause   │ -p newest-cni-702588 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
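	Note: the final audit entry above (pause -p newest-cni-702588 --alsologtostderr -v=1) has no END TIME, matching the Pause failure this log accompanies. A minimal repro sketch, assuming the profile still exists and the binary under test is out/minikube-linux-arm64:
	
	  # re-run the failing pause step with verbose logging
	  out/minikube-linux-arm64 -p newest-cni-702588 pause --alsologtostderr -v=1
	  # then check what state the profile is left in
	  out/minikube-linux-arm64 -p newest-cni-702588 status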
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 20:03:43
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 20:03:43.581090  475934 out.go:360] Setting OutFile to fd 1 ...
	I1027 20:03:43.581322  475934 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:03:43.581351  475934 out.go:374] Setting ErrFile to fd 2...
	I1027 20:03:43.581371  475934 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:03:43.581652  475934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 20:03:43.582057  475934 out.go:368] Setting JSON to false
	I1027 20:03:43.583128  475934 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9976,"bootTime":1761585448,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1027 20:03:43.583221  475934 start.go:141] virtualization:  
	I1027 20:03:43.586248  475934 out.go:179] * [default-k8s-diff-port-073048] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 20:03:43.590250  475934 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 20:03:43.590337  475934 notify.go:220] Checking for updates...
	I1027 20:03:43.597073  475934 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 20:03:43.599979  475934 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 20:03:43.603000  475934 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube
	I1027 20:03:43.605841  475934 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 20:03:43.608703  475934 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 20:03:43.612059  475934 config.go:182] Loaded profile config "default-k8s-diff-port-073048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:03:43.612670  475934 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 20:03:43.650725  475934 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 20:03:43.650858  475934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 20:03:43.760722  475934 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-27 20:03:43.745833378 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 20:03:43.760838  475934 docker.go:318] overlay module found
	I1027 20:03:43.763981  475934 out.go:179] * Using the docker driver based on existing profile
	I1027 20:03:43.766853  475934 start.go:305] selected driver: docker
	I1027 20:03:43.766875  475934 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-073048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-073048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:03:43.766977  475934 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 20:03:43.767782  475934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 20:03:43.872318  475934 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-27 20:03:43.85744425 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 20:03:43.872682  475934 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 20:03:43.872716  475934 cni.go:84] Creating CNI manager for ""
	I1027 20:03:43.872783  475934 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 20:03:43.872828  475934 start.go:349] cluster config:
	{Name:default-k8s-diff-port-073048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-073048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:03:43.876147  475934 out.go:179] * Starting "default-k8s-diff-port-073048" primary control-plane node in "default-k8s-diff-port-073048" cluster
	I1027 20:03:43.879075  475934 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 20:03:43.882000  475934 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 20:03:43.884874  475934 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:03:43.884937  475934 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 20:03:43.884946  475934 cache.go:58] Caching tarball of preloaded images
	I1027 20:03:43.885049  475934 preload.go:233] Found /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 20:03:43.885059  475934 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 20:03:43.885176  475934 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/config.json ...
	I1027 20:03:43.885395  475934 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 20:03:43.911590  475934 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 20:03:43.911619  475934 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 20:03:43.911643  475934 cache.go:232] Successfully downloaded all kic artifacts
	I1027 20:03:43.911667  475934 start.go:360] acquireMachinesLock for default-k8s-diff-port-073048: {Name:mk90694371f699bc05745bfd1e2e3f9abdf20057 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 20:03:43.911726  475934 start.go:364] duration metric: took 35.61µs to acquireMachinesLock for "default-k8s-diff-port-073048"
	I1027 20:03:43.911750  475934 start.go:96] Skipping create...Using existing machine configuration
	I1027 20:03:43.911760  475934 fix.go:54] fixHost starting: 
	I1027 20:03:43.912032  475934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-073048 --format={{.State.Status}}
	I1027 20:03:43.941231  475934 fix.go:112] recreateIfNeeded on default-k8s-diff-port-073048: state=Stopped err=<nil>
	W1027 20:03:43.941265  475934 fix.go:138] unexpected machine state, will restart: <nil>
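	The state=Stopped probe above is why this run restarts the container: the same process (475934) resumes below at 20:03:43.944510 with "Restarting existing docker container", after several interleaved lines from the concurrent newest-cni-702588 run (474316). The probe is plain Go templating over docker inspect; a sketch assuming the docker CLI is available on the host:
	
	  # query the container state the same way fixHost does
	  docker container inspect -f '{{.State.Status}}' default-k8s-diff-port-073048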
	I1027 20:03:47.547109  474316 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.989658618s)
	I1027 20:03:47.547164  474316 api_server.go:52] waiting for apiserver process to appear ...
	I1027 20:03:47.547199  474316 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.05710845s)
	I1027 20:03:47.547227  474316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 20:03:47.547271  474316 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.934889957s)
	I1027 20:03:47.748130  474316 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.826812386s)
	I1027 20:03:47.748165  474316 api_server.go:72] duration metric: took 7.581306532s to wait for apiserver process to appear ...
	I1027 20:03:47.748179  474316 api_server.go:88] waiting for apiserver healthz status ...
	I1027 20:03:47.748199  474316 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 20:03:47.751183  474316 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-702588 addons enable metrics-server
	
	I1027 20:03:47.754129  474316 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1027 20:03:47.757352  474316 addons.go:514] duration metric: took 7.590245975s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1027 20:03:47.760431  474316 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 20:03:47.760456  474316 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
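	The 500 above is caused only by [-]poststarthook/rbac/bootstrap-roles; every other check passes, and the next poll below returns 200 once the bootstrap RBAC roles are reconciled. A hand-check sketch, assuming kubectl is already pointed at this cluster:
	
	  # verbose healthz prints the same per-check [+]/[-] breakdown
	  kubectl get --raw='/healthz?verbose'
	  # individual checks are also exposed as subpaths
	  kubectl get --raw='/healthz/poststarthook/rbac/bootstrap-roles'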
	I1027 20:03:48.249053  474316 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 20:03:48.268262  474316 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1027 20:03:48.269429  474316 api_server.go:141] control plane version: v1.34.1
	I1027 20:03:48.269453  474316 api_server.go:131] duration metric: took 521.268795ms to wait for apiserver health ...
	I1027 20:03:48.269463  474316 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 20:03:48.277749  474316 system_pods.go:59] 8 kube-system pods found
	I1027 20:03:48.277786  474316 system_pods.go:61] "coredns-66bc5c9577-xclwd" [eee638fa-65a2-4c75-ba2c-7615f09c51da] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 20:03:48.277795  474316 system_pods.go:61] "etcd-newest-cni-702588" [84702404-c34c-450f-a8c7-f94b0088ac21] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 20:03:48.277801  474316 system_pods.go:61] "kindnet-7ctmm" [98e70164-cd51-4563-91d0-7c0bae3c2ade] Running
	I1027 20:03:48.277808  474316 system_pods.go:61] "kube-apiserver-newest-cni-702588" [e508c926-b287-4ae8-83a6-a1a4360c85f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 20:03:48.277814  474316 system_pods.go:61] "kube-controller-manager-newest-cni-702588" [01fa6132-66de-422f-bbd3-2c1e46280199] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 20:03:48.277824  474316 system_pods.go:61] "kube-proxy-k9lhg" [f36ed32e-d331-485d-ba07-01353f65e231] Running
	I1027 20:03:48.277830  474316 system_pods.go:61] "kube-scheduler-newest-cni-702588" [6089c80f-86d4-4837-9eaf-2e473ed151d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 20:03:48.277839  474316 system_pods.go:61] "storage-provisioner" [9074befc-b06a-4ae1-8cf5-5544c94b2e07] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 20:03:48.277846  474316 system_pods.go:74] duration metric: took 8.377678ms to wait for pod list to return data ...
	I1027 20:03:48.277859  474316 default_sa.go:34] waiting for default service account to be created ...
	I1027 20:03:48.283960  474316 default_sa.go:45] found service account: "default"
	I1027 20:03:48.283983  474316 default_sa.go:55] duration metric: took 6.119164ms for default service account to be created ...
	I1027 20:03:48.283995  474316 kubeadm.go:586] duration metric: took 8.117139466s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1027 20:03:48.284010  474316 node_conditions.go:102] verifying NodePressure condition ...
	I1027 20:03:48.286653  474316 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 20:03:48.286683  474316 node_conditions.go:123] node cpu capacity is 2
	I1027 20:03:48.286696  474316 node_conditions.go:105] duration metric: took 2.681172ms to run NodePressure ...
	I1027 20:03:48.286708  474316 start.go:241] waiting for startup goroutines ...
	I1027 20:03:48.286715  474316 start.go:246] waiting for cluster config update ...
	I1027 20:03:48.286726  474316 start.go:255] writing updated cluster config ...
	I1027 20:03:48.287040  474316 ssh_runner.go:195] Run: rm -f paused
	I1027 20:03:48.407593  474316 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 20:03:48.412953  474316 out.go:179] * Done! kubectl is now configured to use "newest-cni-702588" cluster and "default" namespace by default
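	The kubectl/cluster skew reported above (1.33.2 client vs. 1.34.1 control plane) is within kubectl's supported one-minor-version skew, so the message is informational rather than an error. A quick way to re-check both sides, assuming kubectl is on PATH:
	
	  # print client and server versions side by side
	  kubectl version --output=json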
	I1027 20:03:43.944510  475934 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-073048" ...
	I1027 20:03:43.944595  475934 cli_runner.go:164] Run: docker start default-k8s-diff-port-073048
	I1027 20:03:44.374054  475934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-073048 --format={{.State.Status}}
	I1027 20:03:44.405561  475934 kic.go:430] container "default-k8s-diff-port-073048" state is running.
	I1027 20:03:44.405957  475934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-073048
	I1027 20:03:44.449512  475934 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/config.json ...
	I1027 20:03:44.449738  475934 machine.go:93] provisionDockerMachine start ...
	I1027 20:03:44.449802  475934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:03:44.481888  475934 main.go:141] libmachine: Using SSH client type: native
	I1027 20:03:44.482207  475934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1027 20:03:44.482216  475934 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 20:03:44.483577  475934 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1027 20:03:47.670596  475934 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-073048
	
	I1027 20:03:47.670673  475934 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-073048"
	I1027 20:03:47.670782  475934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:03:47.699086  475934 main.go:141] libmachine: Using SSH client type: native
	I1027 20:03:47.699380  475934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1027 20:03:47.699391  475934 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-073048 && echo "default-k8s-diff-port-073048" | sudo tee /etc/hostname
	I1027 20:03:47.881587  475934 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-073048
	
	I1027 20:03:47.881661  475934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:03:47.902977  475934 main.go:141] libmachine: Using SSH client type: native
	I1027 20:03:47.903323  475934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1027 20:03:47.903351  475934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-073048' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-073048/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-073048' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 20:03:48.059589  475934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
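	The command above pins 127.0.1.1 to the profile name inside the node so the hostname resolves locally; the empty output is expected when the sed branch is taken, since sed -i prints nothing. A verification sketch, assuming SSH access to the node:
	
	  # confirm the hostname mapping landed in /etc/hosts
	  minikube -p default-k8s-diff-port-073048 ssh -- grep default-k8s-diff-port-073048 /etc/hosts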
	I1027 20:03:48.059693  475934 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-266035/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-266035/.minikube}
	I1027 20:03:48.059728  475934 ubuntu.go:190] setting up certificates
	I1027 20:03:48.059738  475934 provision.go:84] configureAuth start
	I1027 20:03:48.059839  475934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-073048
	I1027 20:03:48.081673  475934 provision.go:143] copyHostCerts
	I1027 20:03:48.081745  475934 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem, removing ...
	I1027 20:03:48.081765  475934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem
	I1027 20:03:48.081851  475934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem (1078 bytes)
	I1027 20:03:48.081962  475934 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem, removing ...
	I1027 20:03:48.081970  475934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem
	I1027 20:03:48.081999  475934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem (1123 bytes)
	I1027 20:03:48.082107  475934 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem, removing ...
	I1027 20:03:48.082118  475934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem
	I1027 20:03:48.082145  475934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem (1675 bytes)
	I1027 20:03:48.082211  475934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-073048 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-073048 localhost minikube]
	I1027 20:03:48.519546  475934 provision.go:177] copyRemoteCerts
	I1027 20:03:48.519680  475934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 20:03:48.519741  475934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:03:48.554944  475934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa Username:docker}
	I1027 20:03:48.674694  475934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 20:03:48.692859  475934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1027 20:03:48.713897  475934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 20:03:48.737836  475934 provision.go:87] duration metric: took 678.068748ms to configureAuth
	I1027 20:03:48.737867  475934 ubuntu.go:206] setting minikube options for container-runtime
	I1027 20:03:48.738118  475934 config.go:182] Loaded profile config "default-k8s-diff-port-073048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:03:48.738275  475934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:03:48.759617  475934 main.go:141] libmachine: Using SSH client type: native
	I1027 20:03:48.759944  475934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1027 20:03:48.760036  475934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 20:03:49.246376  475934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 20:03:49.246406  475934 machine.go:96] duration metric: took 4.796659492s to provisionDockerMachine
	I1027 20:03:49.246417  475934 start.go:293] postStartSetup for "default-k8s-diff-port-073048" (driver="docker")
	I1027 20:03:49.246428  475934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 20:03:49.246494  475934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 20:03:49.246540  475934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:03:49.282810  475934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa Username:docker}
	I1027 20:03:49.405968  475934 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 20:03:49.410064  475934 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 20:03:49.410096  475934 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 20:03:49.410107  475934 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/addons for local assets ...
	I1027 20:03:49.410160  475934 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/files for local assets ...
	I1027 20:03:49.410238  475934 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem -> 2678802.pem in /etc/ssl/certs
	I1027 20:03:49.410340  475934 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 20:03:49.419232  475934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 20:03:49.444631  475934 start.go:296] duration metric: took 198.199674ms for postStartSetup
	I1027 20:03:49.444717  475934 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 20:03:49.444771  475934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:03:49.476358  475934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa Username:docker}
	I1027 20:03:49.600319  475934 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 20:03:49.610759  475934 fix.go:56] duration metric: took 5.698990093s for fixHost
	I1027 20:03:49.610796  475934 start.go:83] releasing machines lock for "default-k8s-diff-port-073048", held for 5.699057431s
	I1027 20:03:49.610865  475934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-073048
	I1027 20:03:49.634747  475934 ssh_runner.go:195] Run: cat /version.json
	I1027 20:03:49.634804  475934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:03:49.635329  475934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 20:03:49.635400  475934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:03:49.684000  475934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa Username:docker}
	I1027 20:03:49.687222  475934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa Username:docker}
	I1027 20:03:49.905984  475934 ssh_runner.go:195] Run: systemctl --version
	I1027 20:03:49.912792  475934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 20:03:49.966670  475934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 20:03:49.971777  475934 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 20:03:49.971932  475934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 20:03:49.985109  475934 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 20:03:49.985185  475934 start.go:495] detecting cgroup driver to use...
	I1027 20:03:49.985252  475934 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 20:03:49.985336  475934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 20:03:50.010750  475934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 20:03:50.029734  475934 docker.go:218] disabling cri-docker service (if available) ...
	I1027 20:03:50.029859  475934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 20:03:50.057013  475934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 20:03:50.079165  475934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 20:03:50.217217  475934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 20:03:50.392972  475934 docker.go:234] disabling docker service ...
	I1027 20:03:50.393035  475934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 20:03:50.426106  475934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 20:03:50.445992  475934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 20:03:50.592981  475934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 20:03:50.736668  475934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 20:03:50.752010  475934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 20:03:50.768026  475934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 20:03:50.768129  475934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:03:50.777003  475934 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 20:03:50.777090  475934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:03:50.785883  475934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:03:50.794657  475934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:03:50.803525  475934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 20:03:50.811804  475934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:03:50.821076  475934 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:03:50.829683  475934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:03:50.839838  475934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 20:03:50.847297  475934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 20:03:50.854648  475934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:03:50.967326  475934 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 20:03:51.108240  475934 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 20:03:51.108368  475934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 20:03:51.112809  475934 start.go:563] Will wait 60s for crictl version
	I1027 20:03:51.112914  475934 ssh_runner.go:195] Run: which crictl
	I1027 20:03:51.117099  475934 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 20:03:51.143576  475934 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 20:03:51.143715  475934 ssh_runner.go:195] Run: crio --version
	I1027 20:03:51.178778  475934 ssh_runner.go:195] Run: crio --version
	I1027 20:03:51.214398  475934 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
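	The crictl probes above use the endpoint written to /etc/crictl.yaml earlier in this log (unix:///var/run/crio/crio.sock). To confirm the runtime version directly on the node, a sketch assuming the same socket path:
	
	  # query CRI-O through the configured CRI socket
	  minikube -p default-k8s-diff-port-073048 ssh -- sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version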
	
	
	==> CRI-O <==
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.712339081Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.720449091Z" level=info msg="Running pod sandbox: kube-system/kindnet-7ctmm/POD" id=8ebb7cf5-a1cd-4278-8dbe-04c98c167ae3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.720515296Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.723739272Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=5041df47-f104-4e14-9abd-7da5da36e687 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.735732783Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=8ebb7cf5-a1cd-4278-8dbe-04c98c167ae3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.738877606Z" level=info msg="Ran pod sandbox b6df27157f4fb552556df219139a41388f68c93e90b0f6bf8a1bbd8ef7ae001e with infra container: kube-system/kindnet-7ctmm/POD" id=8ebb7cf5-a1cd-4278-8dbe-04c98c167ae3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.743619364Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=58ff6189-e930-47e0-b145-b6db6214b926 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.748068356Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=81fcac47-e054-454f-9aa8-54d3460be5f9 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.753358908Z" level=info msg="Creating container: kube-system/kindnet-7ctmm/kindnet-cni" id=83d61062-6683-4dbf-84d9-6a6aff680867 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.753593428Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.756716787Z" level=info msg="Ran pod sandbox 3a57f0ae66f27e1bce32f99c1837f26a5d8b2b06f034a0c917c4dd9011372b52 with infra container: kube-system/kube-proxy-k9lhg/POD" id=5041df47-f104-4e14-9abd-7da5da36e687 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.757924302Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=408aa198-e9a7-42eb-b621-0566c226e1c0 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.759813726Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=6c71ab18-a600-48cc-9d89-8c5fbff56691 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.762812732Z" level=info msg="Creating container: kube-system/kube-proxy-k9lhg/kube-proxy" id=ee4eef20-dc23-44b2-8e62-bd9822c86655 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.763313551Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.773916135Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.779342806Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.781703298Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.785420051Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.838579633Z" level=info msg="Created container a70a228324365c1e561d8cfe7dd7a2028a7e30509835bde732e44b3427b70131: kube-system/kube-proxy-k9lhg/kube-proxy" id=ee4eef20-dc23-44b2-8e62-bd9822c86655 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.838860946Z" level=info msg="Created container 20f5ab471ade7a086d7d0e7f822d761ca44030cdc590decc67a3adffd147ad8f: kube-system/kindnet-7ctmm/kindnet-cni" id=83d61062-6683-4dbf-84d9-6a6aff680867 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.83992481Z" level=info msg="Starting container: a70a228324365c1e561d8cfe7dd7a2028a7e30509835bde732e44b3427b70131" id=a2d9af65-fdc7-49a5-844a-4ec87eda2aac name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.83997024Z" level=info msg="Starting container: 20f5ab471ade7a086d7d0e7f822d761ca44030cdc590decc67a3adffd147ad8f" id=ee82222a-e4e9-4d2d-b5e2-7f6af3300119 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.842203154Z" level=info msg="Started container" PID=1055 containerID=20f5ab471ade7a086d7d0e7f822d761ca44030cdc590decc67a3adffd147ad8f description=kube-system/kindnet-7ctmm/kindnet-cni id=ee82222a-e4e9-4d2d-b5e2-7f6af3300119 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b6df27157f4fb552556df219139a41388f68c93e90b0f6bf8a1bbd8ef7ae001e
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.852230796Z" level=info msg="Started container" PID=1060 containerID=a70a228324365c1e561d8cfe7dd7a2028a7e30509835bde732e44b3427b70131 description=kube-system/kube-proxy-k9lhg/kube-proxy id=a2d9af65-fdc7-49a5-844a-4ec87eda2aac name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a57f0ae66f27e1bce32f99c1837f26a5d8b2b06f034a0c917c4dd9011372b52
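	The CRI-O entries above are collected from the node's journal. To tail them directly, a sketch assuming SSH access into the newest-cni-702588 node:
	
	  # follow the container runtime log on the node
	  minikube -p newest-cni-702588 ssh -- sudo journalctl -u crio --no-pager -n 50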
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	a70a228324365       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   6 seconds ago       Running             kube-proxy                1                   3a57f0ae66f27       kube-proxy-k9lhg                            kube-system
	20f5ab471ade7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   6 seconds ago       Running             kindnet-cni               1                   b6df27157f4fb       kindnet-7ctmm                               kube-system
	50e1303f4abc6       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   13 seconds ago      Running             kube-scheduler            1                   1c0069033e5d6       kube-scheduler-newest-cni-702588            kube-system
	8d587d21ae021       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   13 seconds ago      Running             kube-controller-manager   1                   c71c38a730a67       kube-controller-manager-newest-cni-702588   kube-system
	72abb4993a475       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   13 seconds ago      Running             kube-apiserver            1                   ae27e5ca1a858       kube-apiserver-newest-cni-702588            kube-system
	f8fab5749ac40       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   13 seconds ago      Running             etcd                      1                   ce0b0cdef8fd6       etcd-newest-cni-702588                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-702588
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-702588
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=newest-cni-702588
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T20_03_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 20:03:18 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-702588
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 20:03:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 20:03:46 +0000   Mon, 27 Oct 2025 20:03:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 20:03:46 +0000   Mon, 27 Oct 2025 20:03:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 20:03:46 +0000   Mon, 27 Oct 2025 20:03:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 27 Oct 2025 20:03:46 +0000   Mon, 27 Oct 2025 20:03:13 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-702588
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                4ba06414-4234-4b37-9dae-dda0eb66f304
	  Boot ID:                    23ea05b4-8203-4fb9-a84a-5deb4b091cbb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-702588                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         32s
	  kube-system                 kindnet-7ctmm                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-newest-cni-702588             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-newest-cni-702588    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-k9lhg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-newest-cni-702588             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 24s                kube-proxy       
	  Normal   Starting                 5s                 kube-proxy       
	  Warning  CgroupV1                 41s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  41s (x9 over 41s)  kubelet          Node newest-cni-702588 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    41s (x8 over 41s)  kubelet          Node newest-cni-702588 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     41s (x7 over 41s)  kubelet          Node newest-cni-702588 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  32s                kubelet          Node newest-cni-702588 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 32s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    32s                kubelet          Node newest-cni-702588 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     32s                kubelet          Node newest-cni-702588 status is now: NodeHasSufficientPID
	  Normal   Starting                 32s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           28s                node-controller  Node newest-cni-702588 event: Registered Node newest-cni-702588 in Controller
	  Normal   Starting                 14s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 14s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  14s (x8 over 14s)  kubelet          Node newest-cni-702588 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14s (x8 over 14s)  kubelet          Node newest-cni-702588 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14s (x8 over 14s)  kubelet          Node newest-cni-702588 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-702588 event: Registered Node newest-cni-702588 in Controller
	
	
	==> dmesg <==
	[  +7.015891] overlayfs: idmapped layers are currently not supported
	[Oct27 19:41] overlayfs: idmapped layers are currently not supported
	[ +28.455216] overlayfs: idmapped layers are currently not supported
	[Oct27 19:42] overlayfs: idmapped layers are currently not supported
	[ +26.490174] overlayfs: idmapped layers are currently not supported
	[Oct27 19:44] overlayfs: idmapped layers are currently not supported
	[Oct27 19:45] overlayfs: idmapped layers are currently not supported
	[Oct27 19:47] overlayfs: idmapped layers are currently not supported
	[Oct27 19:49] overlayfs: idmapped layers are currently not supported
	[ +31.410335] overlayfs: idmapped layers are currently not supported
	[Oct27 19:51] overlayfs: idmapped layers are currently not supported
	[Oct27 19:53] overlayfs: idmapped layers are currently not supported
	[Oct27 19:54] overlayfs: idmapped layers are currently not supported
	[Oct27 19:55] overlayfs: idmapped layers are currently not supported
	[Oct27 19:56] overlayfs: idmapped layers are currently not supported
	[Oct27 19:57] overlayfs: idmapped layers are currently not supported
	[Oct27 19:58] overlayfs: idmapped layers are currently not supported
	[Oct27 19:59] overlayfs: idmapped layers are currently not supported
	[Oct27 20:00] overlayfs: idmapped layers are currently not supported
	[ +41.321877] overlayfs: idmapped layers are currently not supported
	[Oct27 20:01] overlayfs: idmapped layers are currently not supported
	[Oct27 20:02] overlayfs: idmapped layers are currently not supported
	[Oct27 20:03] overlayfs: idmapped layers are currently not supported
	[ +26.735505] overlayfs: idmapped layers are currently not supported
	[ +12.481352] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f8fab5749ac40e20fd27699d7357e78ed023c1efecb49166aa43e7474e86b557] <==
	{"level":"warn","ts":"2025-10-27T20:03:43.550597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:43.572783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:43.600339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:43.659272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:43.681954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:43.739206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:43.806200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:43.909622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:43.915136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:43.965253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:43.991617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:44.032678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:44.048273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:44.066740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:44.121125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:44.176186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:44.236740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:44.302048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:44.353725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:44.414281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:44.458622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:44.520073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:44.566522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:44.593988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:44.763955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50534","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:03:53 up  2:46,  0 user,  load average: 4.74, 3.40, 2.81
	Linux newest-cni-702588 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [20f5ab471ade7a086d7d0e7f822d761ca44030cdc590decc67a3adffd147ad8f] <==
	I1027 20:03:46.938531       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 20:03:46.938837       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1027 20:03:46.939095       1 main.go:148] setting mtu 1500 for CNI 
	I1027 20:03:46.939124       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 20:03:46.939135       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T20:03:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 20:03:47.149264       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 20:03:47.149357       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 20:03:47.149390       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 20:03:47.151078       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [72abb4993a475217f2a8c95e18b5ddddfda40e2dc2e0cf42b38d6d9ef04c3a63] <==
	I1027 20:03:46.371214       1 aggregator.go:171] initial CRD sync complete...
	I1027 20:03:46.371226       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 20:03:46.371234       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 20:03:46.371240       1 cache.go:39] Caches are synced for autoregister controller
	I1027 20:03:46.388061       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1027 20:03:46.388117       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 20:03:46.388143       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 20:03:46.421127       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1027 20:03:46.421257       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1027 20:03:46.424890       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1027 20:03:46.424943       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 20:03:46.439599       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1027 20:03:46.467010       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1027 20:03:46.607131       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 20:03:46.833264       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 20:03:47.074522       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 20:03:47.226970       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 20:03:47.352787       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 20:03:47.375467       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 20:03:47.697449       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.224.93"}
	I1027 20:03:47.739895       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.237.64"}
	I1027 20:03:49.879687       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 20:03:49.930721       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 20:03:50.073932       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 20:03:50.221345       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [8d587d21ae021230e28963c8c71ea231d0d95d971ef978ac498061c57511609b] <==
	I1027 20:03:49.677054       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 20:03:49.677104       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 20:03:49.677169       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-702588"
	I1027 20:03:49.677200       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1027 20:03:49.677292       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 20:03:49.677440       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1027 20:03:49.678304       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1027 20:03:49.678360       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 20:03:49.678402       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1027 20:03:49.678460       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1027 20:03:49.678655       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 20:03:49.694419       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 20:03:49.694715       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 20:03:49.694837       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1027 20:03:49.694859       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 20:03:49.694935       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 20:03:49.712563       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 20:03:49.712742       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 20:03:49.713000       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 20:03:49.714267       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1027 20:03:49.714417       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1027 20:03:49.717720       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 20:03:49.719155       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 20:03:49.720391       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 20:03:49.726215       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-proxy [a70a228324365c1e561d8cfe7dd7a2028a7e30509835bde732e44b3427b70131] <==
	I1027 20:03:47.425988       1 server_linux.go:53] "Using iptables proxy"
	I1027 20:03:47.663894       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 20:03:47.766228       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 20:03:47.766356       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1027 20:03:47.766506       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 20:03:47.825621       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 20:03:47.825746       1 server_linux.go:132] "Using iptables Proxier"
	I1027 20:03:47.829705       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 20:03:47.830192       1 server.go:527] "Version info" version="v1.34.1"
	I1027 20:03:47.830463       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 20:03:47.832326       1 config.go:200] "Starting service config controller"
	I1027 20:03:47.834833       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 20:03:47.832630       1 config.go:106] "Starting endpoint slice config controller"
	I1027 20:03:47.835292       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 20:03:47.832649       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 20:03:47.835410       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 20:03:47.835470       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 20:03:47.833745       1 config.go:309] "Starting node config controller"
	I1027 20:03:47.835550       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 20:03:47.835592       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 20:03:47.935240       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 20:03:47.935454       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [50e1303f4abc6c0afd542eb75bdf66db50d8a97bbe1ab2251b60144ee40ebbdf] <==
	I1027 20:03:42.539104       1 serving.go:386] Generated self-signed cert in-memory
	I1027 20:03:47.534507       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 20:03:47.534966       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 20:03:47.609503       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 20:03:47.609689       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1027 20:03:47.609712       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1027 20:03:47.609740       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 20:03:47.612823       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:03:47.612843       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:03:47.612862       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 20:03:47.612867       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 20:03:47.710450       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1027 20:03:47.714932       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 20:03:47.715269       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: I1027 20:03:46.389792     726 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-702588"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: I1027 20:03:46.389889     726 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-702588"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: I1027 20:03:46.389916     726 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: I1027 20:03:46.390803     726 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: I1027 20:03:46.399082     726 apiserver.go:52] "Watching apiserver"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: I1027 20:03:46.399349     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-702588"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: E1027 20:03:46.449043     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-702588\" already exists" pod="kube-system/kube-scheduler-newest-cni-702588"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: I1027 20:03:46.449077     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-702588"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: E1027 20:03:46.449296     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-702588\" already exists" pod="kube-system/kube-scheduler-newest-cni-702588"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: E1027 20:03:46.477957     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-702588\" already exists" pod="kube-system/etcd-newest-cni-702588"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: I1027 20:03:46.477992     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-702588"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: I1027 20:03:46.487719     726 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: E1027 20:03:46.540982     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-702588\" already exists" pod="kube-system/kube-apiserver-newest-cni-702588"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: I1027 20:03:46.541015     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-702588"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: E1027 20:03:46.577228     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-702588\" already exists" pod="kube-system/kube-controller-manager-newest-cni-702588"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: I1027 20:03:46.584777     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98e70164-cd51-4563-91d0-7c0bae3c2ade-lib-modules\") pod \"kindnet-7ctmm\" (UID: \"98e70164-cd51-4563-91d0-7c0bae3c2ade\") " pod="kube-system/kindnet-7ctmm"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: I1027 20:03:46.584855     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f36ed32e-d331-485d-ba07-01353f65e231-xtables-lock\") pod \"kube-proxy-k9lhg\" (UID: \"f36ed32e-d331-485d-ba07-01353f65e231\") " pod="kube-system/kube-proxy-k9lhg"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: I1027 20:03:46.584881     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f36ed32e-d331-485d-ba07-01353f65e231-lib-modules\") pod \"kube-proxy-k9lhg\" (UID: \"f36ed32e-d331-485d-ba07-01353f65e231\") " pod="kube-system/kube-proxy-k9lhg"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: I1027 20:03:46.584899     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/98e70164-cd51-4563-91d0-7c0bae3c2ade-cni-cfg\") pod \"kindnet-7ctmm\" (UID: \"98e70164-cd51-4563-91d0-7c0bae3c2ade\") " pod="kube-system/kindnet-7ctmm"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: I1027 20:03:46.584920     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98e70164-cd51-4563-91d0-7c0bae3c2ade-xtables-lock\") pod \"kindnet-7ctmm\" (UID: \"98e70164-cd51-4563-91d0-7c0bae3c2ade\") " pod="kube-system/kindnet-7ctmm"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: I1027 20:03:46.647702     726 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: W1027 20:03:46.751571     726 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/129b04b839d9dc50e5b41cb4e74b918aa12b2f3b4e01d95ffdf708dd0ebe16e9/crio-3a57f0ae66f27e1bce32f99c1837f26a5d8b2b06f034a0c917c4dd9011372b52 WatchSource:0}: Error finding container 3a57f0ae66f27e1bce32f99c1837f26a5d8b2b06f034a0c917c4dd9011372b52: Status 404 returned error can't find the container with id 3a57f0ae66f27e1bce32f99c1837f26a5d8b2b06f034a0c917c4dd9011372b52
	Oct 27 20:03:49 newest-cni-702588 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 20:03:49 newest-cni-702588 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 20:03:49 newest-cni-702588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-702588 -n newest-cni-702588
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-702588 -n newest-cni-702588: exit status 2 (489.751369ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
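For reference, the status probe above can be rerun by hand; note the harness itself treats exit status 2 alongside a Running host as possibly OK. A minimal sketch, assuming the same binary and profile as this run:

	# single field, as the harness queries it
	out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-702588
	# full status for the profile
	out/minikube-linux-arm64 status -p newest-cni-702588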
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-702588 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-xclwd storage-provisioner dashboard-metrics-scraper-6ffb444bf9-whld8 kubernetes-dashboard-855c9754f9-bl8m9
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-702588 describe pod coredns-66bc5c9577-xclwd storage-provisioner dashboard-metrics-scraper-6ffb444bf9-whld8 kubernetes-dashboard-855c9754f9-bl8m9
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-702588 describe pod coredns-66bc5c9577-xclwd storage-provisioner dashboard-metrics-scraper-6ffb444bf9-whld8 kubernetes-dashboard-855c9754f9-bl8m9: exit status 1 (127.996236ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-xclwd" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-whld8" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-bl8m9" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-702588 describe pod coredns-66bc5c9577-xclwd storage-provisioner dashboard-metrics-scraper-6ffb444bf9-whld8 kubernetes-dashboard-855c9754f9-bl8m9: exit status 1
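The NotFound errors above suggest the pods returned by the field-selector query were already gone (or replaced) by the time describe ran. A minimal sketch of the same query, assuming the context from this run:

	# list pods in any phase other than Running, across all namespaces
	kubectl --context newest-cni-702588 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'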
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-702588
helpers_test.go:243: (dbg) docker inspect newest-cni-702588:

-- stdout --
	[
	    {
	        "Id": "129b04b839d9dc50e5b41cb4e74b918aa12b2f3b4e01d95ffdf708dd0ebe16e9",
	        "Created": "2025-10-27T20:02:51.266194536Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 474490,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T20:03:32.861200098Z",
	            "FinishedAt": "2025-10-27T20:03:31.808461582Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/129b04b839d9dc50e5b41cb4e74b918aa12b2f3b4e01d95ffdf708dd0ebe16e9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/129b04b839d9dc50e5b41cb4e74b918aa12b2f3b4e01d95ffdf708dd0ebe16e9/hostname",
	        "HostsPath": "/var/lib/docker/containers/129b04b839d9dc50e5b41cb4e74b918aa12b2f3b4e01d95ffdf708dd0ebe16e9/hosts",
	        "LogPath": "/var/lib/docker/containers/129b04b839d9dc50e5b41cb4e74b918aa12b2f3b4e01d95ffdf708dd0ebe16e9/129b04b839d9dc50e5b41cb4e74b918aa12b2f3b4e01d95ffdf708dd0ebe16e9-json.log",
	        "Name": "/newest-cni-702588",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-702588:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-702588",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "129b04b839d9dc50e5b41cb4e74b918aa12b2f3b4e01d95ffdf708dd0ebe16e9",
	                "LowerDir": "/var/lib/docker/overlay2/e3fc032b9e2d815c987fdb238001c8985f39b4a9a5af185df55765123364c912-init/diff:/var/lib/docker/overlay2/0567218bbe05e8b59b2aef7ad82032d6716040c1f9d9d73e7de8682101079f76/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e3fc032b9e2d815c987fdb238001c8985f39b4a9a5af185df55765123364c912/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e3fc032b9e2d815c987fdb238001c8985f39b4a9a5af185df55765123364c912/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e3fc032b9e2d815c987fdb238001c8985f39b4a9a5af185df55765123364c912/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-702588",
	                "Source": "/var/lib/docker/volumes/newest-cni-702588/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-702588",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-702588",
	                "name.minikube.sigs.k8s.io": "newest-cni-702588",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f156dade5f94d40c31cd1a99daa741f2c6b2e78cbb0b0daac60177b574d66ac0",
	            "SandboxKey": "/var/run/docker/netns/f156dade5f94",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-702588": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:49:35:2b:c1:4f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "534dc751a83fae44f47788d8acf2bcc801410f0472bc4104a1e93fed2fe7f7ff",
	                    "EndpointID": "0279d48705f0c12285c404d320b13ca1f1b2853c84319df3e83e9f61d5d0cfc2",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-702588",
	                        "129b04b839d9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
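The full docker inspect dump above is large; a Go format template can pull just the fields the post-mortem cares about. A minimal sketch, assuming the same container name as this run:

	# container state plus the profile network's IP
	docker inspect newest-cni-702588 --format '{{.State.Status}} paused={{.State.Paused}} ip={{(index .NetworkSettings.Networks "newest-cni-702588").IPAddress}}'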
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-702588 -n newest-cni-702588
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-702588 -n newest-cni-702588: exit status 2 (517.279913ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-702588 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-702588 logs -n 25: (1.54680734s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-629838 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │                     │
	│ stop    │ -p embed-certs-629838 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ addons  │ enable dashboard -p embed-certs-629838 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ start   │ -p embed-certs-629838 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:02 UTC │
	│ image   │ no-preload-300878 image list --format=json                                                                                                                                                                                                    │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ pause   │ -p no-preload-300878 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │                     │
	│ delete  │ -p no-preload-300878                                                                                                                                                                                                                          │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ delete  │ -p no-preload-300878                                                                                                                                                                                                                          │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ delete  │ -p disable-driver-mounts-230052                                                                                                                                                                                                               │ disable-driver-mounts-230052 │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:02 UTC │
	│ start   │ -p default-k8s-diff-port-073048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-073048 │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:03 UTC │
	│ image   │ embed-certs-629838 image list --format=json                                                                                                                                                                                                   │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:02 UTC │
	│ pause   │ -p embed-certs-629838 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │                     │
	│ delete  │ -p embed-certs-629838                                                                                                                                                                                                                         │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:02 UTC │
	│ delete  │ -p embed-certs-629838                                                                                                                                                                                                                         │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:02 UTC │
	│ start   │ -p newest-cni-702588 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:03 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-073048 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-073048 │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-702588 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │                     │
	│ stop    │ -p newest-cni-702588 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │ 27 Oct 25 20:03 UTC │
	│ stop    │ -p default-k8s-diff-port-073048 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-073048 │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │ 27 Oct 25 20:03 UTC │
	│ addons  │ enable dashboard -p newest-cni-702588 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │ 27 Oct 25 20:03 UTC │
	│ start   │ -p newest-cni-702588 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │ 27 Oct 25 20:03 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-073048 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-073048 │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │ 27 Oct 25 20:03 UTC │
	│ start   │ -p default-k8s-diff-port-073048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-073048 │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │                     │
	│ image   │ newest-cni-702588 image list --format=json                                                                                                                                                                                                    │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │ 27 Oct 25 20:03 UTC │
	│ pause   │ -p newest-cni-702588 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 20:03:43
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
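	The "Log line format" header above describes klog's layout: a one-letter severity ([IWEF]), the date as mmdd, a timestamp with microseconds, the emitting thread id, the source file:line, and the message. That prefix makes a saved copy of this log easy to triage with a severity filter; a minimal sketch, assuming the log has been saved to last-start.log without the leading indentation shown here:
	
	# keep only warning/error/fatal lines from a klog-formatted log
	grep -E '^[WEF][0-9]{4} ' last-start.log
	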
	I1027 20:03:43.581090  475934 out.go:360] Setting OutFile to fd 1 ...
	I1027 20:03:43.581322  475934 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:03:43.581351  475934 out.go:374] Setting ErrFile to fd 2...
	I1027 20:03:43.581371  475934 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:03:43.581652  475934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 20:03:43.582057  475934 out.go:368] Setting JSON to false
	I1027 20:03:43.583128  475934 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9976,"bootTime":1761585448,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1027 20:03:43.583221  475934 start.go:141] virtualization:  
	I1027 20:03:43.586248  475934 out.go:179] * [default-k8s-diff-port-073048] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 20:03:43.590250  475934 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 20:03:43.590337  475934 notify.go:220] Checking for updates...
	I1027 20:03:43.597073  475934 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 20:03:43.599979  475934 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 20:03:43.603000  475934 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube
	I1027 20:03:43.605841  475934 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 20:03:43.608703  475934 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 20:03:43.612059  475934 config.go:182] Loaded profile config "default-k8s-diff-port-073048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:03:43.612670  475934 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 20:03:43.650725  475934 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 20:03:43.650858  475934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 20:03:43.760722  475934 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-27 20:03:43.745833378 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 20:03:43.760838  475934 docker.go:318] overlay module found
	I1027 20:03:43.763981  475934 out.go:179] * Using the docker driver based on existing profile
	I1027 20:03:43.766853  475934 start.go:305] selected driver: docker
	I1027 20:03:43.766875  475934 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-073048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-073048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:03:43.766977  475934 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 20:03:43.767782  475934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 20:03:43.872318  475934 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-27 20:03:43.85744425 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 20:03:43.872682  475934 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 20:03:43.872716  475934 cni.go:84] Creating CNI manager for ""
	I1027 20:03:43.872783  475934 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 20:03:43.872828  475934 start.go:349] cluster config:
	{Name:default-k8s-diff-port-073048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-073048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:03:43.876147  475934 out.go:179] * Starting "default-k8s-diff-port-073048" primary control-plane node in "default-k8s-diff-port-073048" cluster
	I1027 20:03:43.879075  475934 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 20:03:43.882000  475934 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 20:03:43.884874  475934 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:03:43.884937  475934 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 20:03:43.884946  475934 cache.go:58] Caching tarball of preloaded images
	I1027 20:03:43.885049  475934 preload.go:233] Found /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 20:03:43.885059  475934 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 20:03:43.885176  475934 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/config.json ...
	I1027 20:03:43.885395  475934 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 20:03:43.911590  475934 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 20:03:43.911619  475934 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 20:03:43.911643  475934 cache.go:232] Successfully downloaded all kic artifacts
	I1027 20:03:43.911667  475934 start.go:360] acquireMachinesLock for default-k8s-diff-port-073048: {Name:mk90694371f699bc05745bfd1e2e3f9abdf20057 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 20:03:43.911726  475934 start.go:364] duration metric: took 35.61µs to acquireMachinesLock for "default-k8s-diff-port-073048"
	I1027 20:03:43.911750  475934 start.go:96] Skipping create...Using existing machine configuration
	I1027 20:03:43.911760  475934 fix.go:54] fixHost starting: 
	I1027 20:03:43.912032  475934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-073048 --format={{.State.Status}}
	I1027 20:03:43.941231  475934 fix.go:112] recreateIfNeeded on default-k8s-diff-port-073048: state=Stopped err=<nil>
	W1027 20:03:43.941265  475934 fix.go:138] unexpected machine state, will restart: <nil>
	I1027 20:03:47.547109  474316 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.989658618s)
	I1027 20:03:47.547164  474316 api_server.go:52] waiting for apiserver process to appear ...
	I1027 20:03:47.547199  474316 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.05710845s)
	I1027 20:03:47.547227  474316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 20:03:47.547271  474316 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.934889957s)
	I1027 20:03:47.748130  474316 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.826812386s)
	I1027 20:03:47.748165  474316 api_server.go:72] duration metric: took 7.581306532s to wait for apiserver process to appear ...
	I1027 20:03:47.748179  474316 api_server.go:88] waiting for apiserver healthz status ...
	I1027 20:03:47.748199  474316 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 20:03:47.751183  474316 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-702588 addons enable metrics-server
	
	I1027 20:03:47.754129  474316 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1027 20:03:47.757352  474316 addons.go:514] duration metric: took 7.590245975s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1027 20:03:47.760431  474316 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 20:03:47.760456  474316 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 20:03:48.249053  474316 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 20:03:48.268262  474316 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1027 20:03:48.269429  474316 api_server.go:141] control plane version: v1.34.1
	I1027 20:03:48.269453  474316 api_server.go:131] duration metric: took 521.268795ms to wait for apiserver health ...
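	The healthz polling above can be reproduced by hand: by default the apiserver's system:public-info-viewer role grants even unauthenticated clients GET access to /healthz (unless anonymous auth is disabled), and the ?verbose query returns the same per-check [+]/[-] listing that appears in the 500 response. A sketch against the address from the log; -k skips TLS verification of the cluster's private CA:
	
	# per-check health output, matching the [+]/[-] lines above
	curl -sk 'https://192.168.76.2:8443/healthz?verbose'
	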
	I1027 20:03:48.269463  474316 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 20:03:48.277749  474316 system_pods.go:59] 8 kube-system pods found
	I1027 20:03:48.277786  474316 system_pods.go:61] "coredns-66bc5c9577-xclwd" [eee638fa-65a2-4c75-ba2c-7615f09c51da] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 20:03:48.277795  474316 system_pods.go:61] "etcd-newest-cni-702588" [84702404-c34c-450f-a8c7-f94b0088ac21] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 20:03:48.277801  474316 system_pods.go:61] "kindnet-7ctmm" [98e70164-cd51-4563-91d0-7c0bae3c2ade] Running
	I1027 20:03:48.277808  474316 system_pods.go:61] "kube-apiserver-newest-cni-702588" [e508c926-b287-4ae8-83a6-a1a4360c85f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 20:03:48.277814  474316 system_pods.go:61] "kube-controller-manager-newest-cni-702588" [01fa6132-66de-422f-bbd3-2c1e46280199] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 20:03:48.277824  474316 system_pods.go:61] "kube-proxy-k9lhg" [f36ed32e-d331-485d-ba07-01353f65e231] Running
	I1027 20:03:48.277830  474316 system_pods.go:61] "kube-scheduler-newest-cni-702588" [6089c80f-86d4-4837-9eaf-2e473ed151d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 20:03:48.277839  474316 system_pods.go:61] "storage-provisioner" [9074befc-b06a-4ae1-8cf5-5544c94b2e07] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 20:03:48.277846  474316 system_pods.go:74] duration metric: took 8.377678ms to wait for pod list to return data ...
	I1027 20:03:48.277859  474316 default_sa.go:34] waiting for default service account to be created ...
	I1027 20:03:48.283960  474316 default_sa.go:45] found service account: "default"
	I1027 20:03:48.283983  474316 default_sa.go:55] duration metric: took 6.119164ms for default service account to be created ...
	I1027 20:03:48.283995  474316 kubeadm.go:586] duration metric: took 8.117139466s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1027 20:03:48.284010  474316 node_conditions.go:102] verifying NodePressure condition ...
	I1027 20:03:48.286653  474316 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 20:03:48.286683  474316 node_conditions.go:123] node cpu capacity is 2
	I1027 20:03:48.286696  474316 node_conditions.go:105] duration metric: took 2.681172ms to run NodePressure ...
	I1027 20:03:48.286708  474316 start.go:241] waiting for startup goroutines ...
	I1027 20:03:48.286715  474316 start.go:246] waiting for cluster config update ...
	I1027 20:03:48.286726  474316 start.go:255] writing updated cluster config ...
	I1027 20:03:48.287040  474316 ssh_runner.go:195] Run: rm -f paused
	I1027 20:03:48.407593  474316 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 20:03:48.412953  474316 out.go:179] * Done! kubectl is now configured to use "newest-cni-702588" cluster and "default" namespace by default
	I1027 20:03:43.944510  475934 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-073048" ...
	I1027 20:03:43.944595  475934 cli_runner.go:164] Run: docker start default-k8s-diff-port-073048
	I1027 20:03:44.374054  475934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-073048 --format={{.State.Status}}
	I1027 20:03:44.405561  475934 kic.go:430] container "default-k8s-diff-port-073048" state is running.
	I1027 20:03:44.405957  475934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-073048
	I1027 20:03:44.449512  475934 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/config.json ...
	I1027 20:03:44.449738  475934 machine.go:93] provisionDockerMachine start ...
	I1027 20:03:44.449802  475934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:03:44.481888  475934 main.go:141] libmachine: Using SSH client type: native
	I1027 20:03:44.482207  475934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1027 20:03:44.482216  475934 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 20:03:44.483577  475934 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1027 20:03:47.670596  475934 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-073048
	
	I1027 20:03:47.670673  475934 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-073048"
	I1027 20:03:47.670782  475934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:03:47.699086  475934 main.go:141] libmachine: Using SSH client type: native
	I1027 20:03:47.699380  475934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1027 20:03:47.699391  475934 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-073048 && echo "default-k8s-diff-port-073048" | sudo tee /etc/hostname
	I1027 20:03:47.881587  475934 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-073048
	
	I1027 20:03:47.881661  475934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:03:47.902977  475934 main.go:141] libmachine: Using SSH client type: native
	I1027 20:03:47.903323  475934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1027 20:03:47.903351  475934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-073048' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-073048/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-073048' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 20:03:48.059589  475934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
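	When the snippet above takes its sed branch, the node's 127.0.1.1 entry is rewritten to the profile name so the hostname always resolves locally. A minimal check from inside the node (e.g. via minikube -p default-k8s-diff-port-073048 ssh); both commands should print the profile name:
	
	grep '^127.0.1.1' /etc/hosts
	hostname
	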
	I1027 20:03:48.059693  475934 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-266035/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-266035/.minikube}
	I1027 20:03:48.059728  475934 ubuntu.go:190] setting up certificates
	I1027 20:03:48.059738  475934 provision.go:84] configureAuth start
	I1027 20:03:48.059839  475934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-073048
	I1027 20:03:48.081673  475934 provision.go:143] copyHostCerts
	I1027 20:03:48.081745  475934 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem, removing ...
	I1027 20:03:48.081765  475934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem
	I1027 20:03:48.081851  475934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem (1078 bytes)
	I1027 20:03:48.081962  475934 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem, removing ...
	I1027 20:03:48.081970  475934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem
	I1027 20:03:48.081999  475934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem (1123 bytes)
	I1027 20:03:48.082107  475934 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem, removing ...
	I1027 20:03:48.082118  475934 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem
	I1027 20:03:48.082145  475934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem (1675 bytes)
	I1027 20:03:48.082211  475934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-073048 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-073048 localhost minikube]
	I1027 20:03:48.519546  475934 provision.go:177] copyRemoteCerts
	I1027 20:03:48.519680  475934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 20:03:48.519741  475934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:03:48.554944  475934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa Username:docker}
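	The sshutil line above carries everything needed to open the same session manually: the host port docker mapped to the node container's sshd, the profile's private key, and the "docker" user. A sketch built from those values (the inspect format string is the one the cli_runner lines use):
	
	# recover the mapped SSH port for the node container ...
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-073048
	# ... then connect with the profile key as the "docker" user
	ssh -p 33453 -i /home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa docker@127.0.0.1
	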
	I1027 20:03:48.674694  475934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 20:03:48.692859  475934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1027 20:03:48.713897  475934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 20:03:48.737836  475934 provision.go:87] duration metric: took 678.068748ms to configureAuth
	I1027 20:03:48.737867  475934 ubuntu.go:206] setting minikube options for container-runtime
	I1027 20:03:48.738118  475934 config.go:182] Loaded profile config "default-k8s-diff-port-073048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:03:48.738275  475934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:03:48.759617  475934 main.go:141] libmachine: Using SSH client type: native
	I1027 20:03:48.759944  475934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1027 20:03:48.760036  475934 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 20:03:49.246376  475934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 20:03:49.246406  475934 machine.go:96] duration metric: took 4.796659492s to provisionDockerMachine
	I1027 20:03:49.246417  475934 start.go:293] postStartSetup for "default-k8s-diff-port-073048" (driver="docker")
	I1027 20:03:49.246428  475934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 20:03:49.246494  475934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 20:03:49.246540  475934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:03:49.282810  475934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa Username:docker}
	I1027 20:03:49.405968  475934 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 20:03:49.410064  475934 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 20:03:49.410096  475934 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 20:03:49.410107  475934 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/addons for local assets ...
	I1027 20:03:49.410160  475934 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/files for local assets ...
	I1027 20:03:49.410238  475934 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem -> 2678802.pem in /etc/ssl/certs
	I1027 20:03:49.410340  475934 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 20:03:49.419232  475934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 20:03:49.444631  475934 start.go:296] duration metric: took 198.199674ms for postStartSetup
	I1027 20:03:49.444717  475934 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 20:03:49.444771  475934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:03:49.476358  475934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa Username:docker}
	I1027 20:03:49.600319  475934 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 20:03:49.610759  475934 fix.go:56] duration metric: took 5.698990093s for fixHost
	I1027 20:03:49.610796  475934 start.go:83] releasing machines lock for "default-k8s-diff-port-073048", held for 5.699057431s
	I1027 20:03:49.610865  475934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-073048
	I1027 20:03:49.634747  475934 ssh_runner.go:195] Run: cat /version.json
	I1027 20:03:49.634804  475934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:03:49.635329  475934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 20:03:49.635400  475934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:03:49.684000  475934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa Username:docker}
	I1027 20:03:49.687222  475934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa Username:docker}
	I1027 20:03:49.905984  475934 ssh_runner.go:195] Run: systemctl --version
	I1027 20:03:49.912792  475934 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 20:03:49.966670  475934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 20:03:49.971777  475934 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 20:03:49.971932  475934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 20:03:49.985109  475934 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 20:03:49.985185  475934 start.go:495] detecting cgroup driver to use...
	I1027 20:03:49.985252  475934 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 20:03:49.985336  475934 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 20:03:50.010750  475934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 20:03:50.029734  475934 docker.go:218] disabling cri-docker service (if available) ...
	I1027 20:03:50.029859  475934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 20:03:50.057013  475934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 20:03:50.079165  475934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 20:03:50.217217  475934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 20:03:50.392972  475934 docker.go:234] disabling docker service ...
	I1027 20:03:50.393035  475934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 20:03:50.426106  475934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 20:03:50.445992  475934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 20:03:50.592981  475934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 20:03:50.736668  475934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 20:03:50.752010  475934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 20:03:50.768026  475934 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 20:03:50.768129  475934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:03:50.777003  475934 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 20:03:50.777090  475934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:03:50.785883  475934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:03:50.794657  475934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:03:50.803525  475934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 20:03:50.811804  475934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:03:50.821076  475934 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:03:50.829683  475934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
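	Applied in order, the sed/grep edits above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup manager, conmon cgroup, and unprivileged-port sysctl pinned. A sketch of a verification pass over just the keys these commands touch:
	
	# expected matches: pause_image = "registry.k8s.io/pause:3.10.1",
	# cgroup_manager = "cgroupfs", conmon_cgroup = "pod", and a
	# default_sysctls list containing "net.ipv4.ip_unprivileged_port_start=0"
	grep -nE 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	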
	I1027 20:03:50.839838  475934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 20:03:50.847297  475934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 20:03:50.854648  475934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:03:50.967326  475934 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 20:03:51.108240  475934 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 20:03:51.108368  475934 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 20:03:51.112809  475934 start.go:563] Will wait 60s for crictl version
	I1027 20:03:51.112914  475934 ssh_runner.go:195] Run: which crictl
	I1027 20:03:51.117099  475934 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 20:03:51.143576  475934 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 20:03:51.143715  475934 ssh_runner.go:195] Run: crio --version
	I1027 20:03:51.178778  475934 ssh_runner.go:195] Run: crio --version
	I1027 20:03:51.214398  475934 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 20:03:51.217477  475934 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-073048 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 20:03:51.233594  475934 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1027 20:03:51.237828  475934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 20:03:51.248045  475934 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-073048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-073048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 20:03:51.248172  475934 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:03:51.248239  475934 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 20:03:51.281875  475934 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 20:03:51.281901  475934 crio.go:433] Images already preloaded, skipping extraction
	I1027 20:03:51.281970  475934 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 20:03:51.315448  475934 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 20:03:51.315467  475934 cache_images.go:85] Images are preloaded, skipping loading
	I1027 20:03:51.315474  475934 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1027 20:03:51.315567  475934 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-073048 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-073048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 20:03:51.315653  475934 ssh_runner.go:195] Run: crio config
	I1027 20:03:51.389389  475934 cni.go:84] Creating CNI manager for ""
	I1027 20:03:51.389458  475934 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 20:03:51.389483  475934 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 20:03:51.389511  475934 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-073048 NodeName:default-k8s-diff-port-073048 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 20:03:51.389728  475934 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-073048"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
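	The rendered document above is what lands in /var/tmp/minikube/kubeadm.yaml.new a few lines below. Recent kubeadm releases can sanity-check such a file directly; a sketch, assuming kubeadm is among the binaries the next step finds under /var/lib/minikube/binaries/v1.34.1:
	
	# validate the rendered config against the kubeadm API types (run on the node)
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	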
	
	I1027 20:03:51.389833  475934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 20:03:51.402600  475934 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 20:03:51.403015  475934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 20:03:51.420270  475934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1027 20:03:51.434151  475934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 20:03:51.450690  475934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
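	The three scp calls above install the kubelet drop-in, the service unit, and the kubeadm config rendered earlier. After the daemon-reload a few lines below, the merged unit can be inspected on the node; a sketch:
	
	# show the base unit plus the 10-kubeadm.conf drop-in as systemd sees them
	systemctl cat kubelet
	# confirm the effective command line from the drop-in's ExecStart override
	systemctl show kubelet -p ExecStart --no-pager
	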
	I1027 20:03:51.466799  475934 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1027 20:03:51.470649  475934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 20:03:51.481660  475934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:03:51.628941  475934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 20:03:51.646613  475934 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048 for IP: 192.168.85.2
	I1027 20:03:51.646631  475934 certs.go:195] generating shared ca certs ...
	I1027 20:03:51.646647  475934 certs.go:227] acquiring lock for ca certs: {Name:mk172548fc54811b51040cc0201d01eb4b3dd19b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:03:51.649744  475934 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key
	I1027 20:03:51.649839  475934 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key
	I1027 20:03:51.649866  475934 certs.go:257] generating profile certs ...
	I1027 20:03:51.649983  475934 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/client.key
	I1027 20:03:51.650063  475934 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/apiserver.key.09593244
	I1027 20:03:51.650122  475934 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/proxy-client.key
	I1027 20:03:51.650270  475934 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880.pem (1338 bytes)
	W1027 20:03:51.650314  475934 certs.go:480] ignoring /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880_empty.pem, impossibly tiny 0 bytes
	I1027 20:03:51.650337  475934 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 20:03:51.650367  475934 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem (1078 bytes)
	I1027 20:03:51.650390  475934 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem (1123 bytes)
	I1027 20:03:51.650428  475934 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem (1675 bytes)
	I1027 20:03:51.650476  475934 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 20:03:51.651353  475934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 20:03:51.687313  475934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 20:03:51.717791  475934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 20:03:51.753912  475934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 20:03:51.779996  475934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1027 20:03:51.823529  475934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 20:03:51.859601  475934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 20:03:51.905919  475934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 20:03:51.939192  475934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /usr/share/ca-certificates/2678802.pem (1708 bytes)
	I1027 20:03:51.958091  475934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 20:03:51.981452  475934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880.pem --> /usr/share/ca-certificates/267880.pem (1338 bytes)
	I1027 20:03:52.012992  475934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 20:03:52.043707  475934 ssh_runner.go:195] Run: openssl version
	I1027 20:03:52.052161  475934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 20:03:52.069119  475934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:03:52.073969  475934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:03:52.074048  475934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:03:52.137419  475934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 20:03:52.146917  475934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/267880.pem && ln -fs /usr/share/ca-certificates/267880.pem /etc/ssl/certs/267880.pem"
	I1027 20:03:52.162949  475934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/267880.pem
	I1027 20:03:52.167836  475934 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:04 /usr/share/ca-certificates/267880.pem
	I1027 20:03:52.167900  475934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/267880.pem
	I1027 20:03:52.213305  475934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/267880.pem /etc/ssl/certs/51391683.0"
	I1027 20:03:52.222214  475934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2678802.pem && ln -fs /usr/share/ca-certificates/2678802.pem /etc/ssl/certs/2678802.pem"
	I1027 20:03:52.232728  475934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2678802.pem
	I1027 20:03:52.237185  475934 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:04 /usr/share/ca-certificates/2678802.pem
	I1027 20:03:52.237246  475934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2678802.pem
	I1027 20:03:52.292678  475934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2678802.pem /etc/ssl/certs/3ec20f2e.0"
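
The openssl x509 -hash -noout / ln -fs pairs above implement OpenSSL's hashed-directory lookup: TLS clients resolve a CA in /etc/ssl/certs by its 8-hex-digit subject hash, so each installed PEM must also be reachable as <hash>.0 (the numeric suffix exists to disambiguate hash collisions). A minimal Go sketch of that step, assuming a local openssl binary; the helper name linkBySubjectHash is ours, not minikube's:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func linkBySubjectHash(pemPath string) error {
		// `openssl x509 -hash -noout` prints the subject hash that
		// OpenSSL uses to look certificates up in a certs directory.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link) // mirror `ln -fs`: drop any stale link first
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}
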
	I1027 20:03:52.301456  475934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 20:03:52.308715  475934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 20:03:52.358296  475934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 20:03:52.443850  475934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 20:03:52.496856  475934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 20:03:52.611023  475934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 20:03:52.720821  475934 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
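
The -checkend 86400 sweep above asks openssl to exit non-zero if a certificate will have expired 86400 seconds (24 hours) from now, which is how the control-plane certs are screened before deciding whether anything needs regenerating. A sketch of the same check over the paths from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		certs := []string{
			"/var/lib/minikube/certs/apiserver-etcd-client.crt",
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
			"/var/lib/minikube/certs/etcd/peer.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		}
		for _, c := range certs {
			// -checkend 86400 exits non-zero if the cert will have
			// expired 86400 seconds (24h) from now.
			err := exec.Command("openssl", "x509", "-noout", "-in", c, "-checkend", "86400").Run()
			fmt.Printf("%-55s ok=%v\n", c, err == nil)
		}
	}
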
	I1027 20:03:52.815227  475934 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-073048 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-073048 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:03:52.815324  475934 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 20:03:52.815399  475934 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 20:03:52.856538  475934 cri.go:89] found id: "ee7be1ab30d5b150cf1282ac50a0ab38c89e1cf1e5e6081e3e51571014d16a0b"
	I1027 20:03:52.856564  475934 cri.go:89] found id: "420cfe1b91ebd7a517df417e1a1f173e78e7e81a49b61d73bcec204c005ecf7c"
	I1027 20:03:52.856577  475934 cri.go:89] found id: "70203b34337df7979e43ac6fb0f4905a8cdc06feeb089b41d83af7060159d8da"
	I1027 20:03:52.856582  475934 cri.go:89] found id: "47af99655b9f01b93fb734ba61f2da0603ae3f7fa9c76d3c797aeb6e931e722d"
	I1027 20:03:52.856586  475934 cri.go:89] found id: ""
	I1027 20:03:52.856632  475934 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 20:03:52.868447  475934 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T20:03:52Z" level=error msg="open /run/runc: no such file or directory"
	I1027 20:03:52.868539  475934 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 20:03:52.877386  475934 kubeadm.go:416] found existing configuration files, will attempt cluster restart
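
Note the order of events above: the runc list pause-probe fails (this CRI-O node keeps no state under /run/runc), is logged as a warning and ignored, and the restart decision instead hinges on whether the kubeadm artifacts from the previous run still exist. A local Go sketch of that check; the log performs it remotely via sudo ls over SSH:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// The same three artifacts the log checks; if all survive
		// from a previous run, reuse them instead of a fresh init.
		paths := []string{
			"/var/lib/kubelet/kubeadm-flags.env",
			"/var/lib/kubelet/config.yaml",
			"/var/lib/minikube/etcd",
		}
		restart := true
		for _, p := range paths {
			if _, err := os.Stat(p); err != nil {
				restart = false
				break
			}
		}
		if restart {
			fmt.Println("found existing configuration files, will attempt cluster restart")
		} else {
			fmt.Println("missing kubeadm artifacts, a fresh init would be required")
		}
	}
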
	I1027 20:03:52.877409  475934 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1027 20:03:52.877461  475934 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 20:03:52.892838  475934 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 20:03:52.893423  475934 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-073048" does not appear in /home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 20:03:52.893701  475934 kubeconfig.go:62] /home/jenkins/minikube-integration/21801-266035/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-073048" cluster setting kubeconfig missing "default-k8s-diff-port-073048" context setting]
	I1027 20:03:52.894149  475934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/kubeconfig: {Name:mk0a03a594f8d9f9199b1c7cff7c550cec8414c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
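
The kubeconfig repair above re-adds the missing cluster and context entries under a file lock before anything tries to talk to the cluster. A minimal sketch using client-go's clientcmd package; the server address and entry names are taken from the log, while the real entry would also carry the CA certificate path, and the lock is omitted here:

	package main

	import (
		"k8s.io/client-go/tools/clientcmd"
		clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
	)

	func main() {
		path := "/home/jenkins/minikube-integration/21801-266035/kubeconfig"
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			panic(err)
		}
		name := "default-k8s-diff-port-073048"
		// Re-add the cluster and context entries the verifier found missing.
		cfg.Clusters[name] = &clientcmdapi.Cluster{Server: "https://192.168.85.2:8444"}
		cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
		if err := clientcmd.WriteToFile(*cfg, path); err != nil {
			panic(err)
		}
	}
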
	I1027 20:03:52.896028  475934 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 20:03:52.907382  475934 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1027 20:03:52.907425  475934 kubeadm.go:601] duration metric: took 30.010641ms to restartPrimaryControlPlane
	I1027 20:03:52.907436  475934 kubeadm.go:402] duration metric: took 92.219669ms to StartCluster
	I1027 20:03:52.907451  475934 settings.go:142] acquiring lock: {Name:mk49b9e2e9b7901c6b51e89b8b1e8ee7f9e88107 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:03:52.907520  475934 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 20:03:52.908587  475934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/kubeconfig: {Name:mk0a03a594f8d9f9199b1c7cff7c550cec8414c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:03:52.908825  475934 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 20:03:52.909190  475934 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 20:03:52.909270  475934 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-073048"
	I1027 20:03:52.909287  475934 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-073048"
	W1027 20:03:52.909293  475934 addons.go:247] addon storage-provisioner should already be in state true
	I1027 20:03:52.909313  475934 host.go:66] Checking if "default-k8s-diff-port-073048" exists ...
	I1027 20:03:52.909944  475934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-073048 --format={{.State.Status}}
	I1027 20:03:52.910334  475934 config.go:182] Loaded profile config "default-k8s-diff-port-073048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:03:52.910427  475934 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-073048"
	I1027 20:03:52.910467  475934 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-073048"
	W1027 20:03:52.910489  475934 addons.go:247] addon dashboard should already be in state true
	I1027 20:03:52.910538  475934 host.go:66] Checking if "default-k8s-diff-port-073048" exists ...
	I1027 20:03:52.911050  475934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-073048 --format={{.State.Status}}
	I1027 20:03:52.917033  475934 out.go:179] * Verifying Kubernetes components...
	I1027 20:03:52.917287  475934 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-073048"
	I1027 20:03:52.917307  475934 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-073048"
	I1027 20:03:52.917638  475934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-073048 --format={{.State.Status}}
	I1027 20:03:52.920355  475934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:03:52.962696  475934 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1027 20:03:52.971394  475934 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1027 20:03:52.977210  475934 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1027 20:03:52.977237  475934 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1027 20:03:52.977303  475934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:03:52.977458  475934 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 20:03:52.978949  475934 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-073048"
	W1027 20:03:52.978967  475934 addons.go:247] addon default-storageclass should already be in state true
	I1027 20:03:52.979164  475934 host.go:66] Checking if "default-k8s-diff-port-073048" exists ...
	I1027 20:03:52.979601  475934 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-073048 --format={{.State.Status}}
	I1027 20:03:52.980699  475934 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 20:03:52.980721  475934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 20:03:52.980770  475934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:03:53.022659  475934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa Username:docker}
	I1027 20:03:53.032393  475934 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 20:03:53.032412  475934 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 20:03:53.032475  475934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:03:53.043361  475934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa Username:docker}
	I1027 20:03:53.072955  475934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa Username:docker}
	I1027 20:03:53.369326  475934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 20:03:53.384231  475934 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1027 20:03:53.384253  475934 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1027 20:03:53.428344  475934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 20:03:53.467653  475934 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-073048" to be "Ready" ...
	I1027 20:03:53.472489  475934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 20:03:53.474510  475934 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1027 20:03:53.474529  475934 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1027 20:03:53.533670  475934 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1027 20:03:53.533697  475934 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	
	
	==> CRI-O <==
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.712339081Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.720449091Z" level=info msg="Running pod sandbox: kube-system/kindnet-7ctmm/POD" id=8ebb7cf5-a1cd-4278-8dbe-04c98c167ae3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.720515296Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.723739272Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=5041df47-f104-4e14-9abd-7da5da36e687 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.735732783Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=8ebb7cf5-a1cd-4278-8dbe-04c98c167ae3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.738877606Z" level=info msg="Ran pod sandbox b6df27157f4fb552556df219139a41388f68c93e90b0f6bf8a1bbd8ef7ae001e with infra container: kube-system/kindnet-7ctmm/POD" id=8ebb7cf5-a1cd-4278-8dbe-04c98c167ae3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.743619364Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=58ff6189-e930-47e0-b145-b6db6214b926 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.748068356Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=81fcac47-e054-454f-9aa8-54d3460be5f9 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.753358908Z" level=info msg="Creating container: kube-system/kindnet-7ctmm/kindnet-cni" id=83d61062-6683-4dbf-84d9-6a6aff680867 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.753593428Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.756716787Z" level=info msg="Ran pod sandbox 3a57f0ae66f27e1bce32f99c1837f26a5d8b2b06f034a0c917c4dd9011372b52 with infra container: kube-system/kube-proxy-k9lhg/POD" id=5041df47-f104-4e14-9abd-7da5da36e687 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.757924302Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=408aa198-e9a7-42eb-b621-0566c226e1c0 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.759813726Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=6c71ab18-a600-48cc-9d89-8c5fbff56691 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.762812732Z" level=info msg="Creating container: kube-system/kube-proxy-k9lhg/kube-proxy" id=ee4eef20-dc23-44b2-8e62-bd9822c86655 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.763313551Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.773916135Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.779342806Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.781703298Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.785420051Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.838579633Z" level=info msg="Created container a70a228324365c1e561d8cfe7dd7a2028a7e30509835bde732e44b3427b70131: kube-system/kube-proxy-k9lhg/kube-proxy" id=ee4eef20-dc23-44b2-8e62-bd9822c86655 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.838860946Z" level=info msg="Created container 20f5ab471ade7a086d7d0e7f822d761ca44030cdc590decc67a3adffd147ad8f: kube-system/kindnet-7ctmm/kindnet-cni" id=83d61062-6683-4dbf-84d9-6a6aff680867 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.83992481Z" level=info msg="Starting container: a70a228324365c1e561d8cfe7dd7a2028a7e30509835bde732e44b3427b70131" id=a2d9af65-fdc7-49a5-844a-4ec87eda2aac name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.83997024Z" level=info msg="Starting container: 20f5ab471ade7a086d7d0e7f822d761ca44030cdc590decc67a3adffd147ad8f" id=ee82222a-e4e9-4d2d-b5e2-7f6af3300119 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.842203154Z" level=info msg="Started container" PID=1055 containerID=20f5ab471ade7a086d7d0e7f822d761ca44030cdc590decc67a3adffd147ad8f description=kube-system/kindnet-7ctmm/kindnet-cni id=ee82222a-e4e9-4d2d-b5e2-7f6af3300119 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b6df27157f4fb552556df219139a41388f68c93e90b0f6bf8a1bbd8ef7ae001e
	Oct 27 20:03:46 newest-cni-702588 crio[610]: time="2025-10-27T20:03:46.852230796Z" level=info msg="Started container" PID=1060 containerID=a70a228324365c1e561d8cfe7dd7a2028a7e30509835bde732e44b3427b70131 description=kube-system/kube-proxy-k9lhg/kube-proxy id=a2d9af65-fdc7-49a5-844a-4ec87eda2aac name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a57f0ae66f27e1bce32f99c1837f26a5d8b2b06f034a0c917c4dd9011372b52
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	a70a228324365       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   9 seconds ago       Running             kube-proxy                1                   3a57f0ae66f27       kube-proxy-k9lhg                            kube-system
	20f5ab471ade7       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   9 seconds ago       Running             kindnet-cni               1                   b6df27157f4fb       kindnet-7ctmm                               kube-system
	50e1303f4abc6       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   16 seconds ago      Running             kube-scheduler            1                   1c0069033e5d6       kube-scheduler-newest-cni-702588            kube-system
	8d587d21ae021       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   16 seconds ago      Running             kube-controller-manager   1                   c71c38a730a67       kube-controller-manager-newest-cni-702588   kube-system
	72abb4993a475       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   16 seconds ago      Running             kube-apiserver            1                   ae27e5ca1a858       kube-apiserver-newest-cni-702588            kube-system
	f8fab5749ac40       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   16 seconds ago      Running             etcd                      1                   ce0b0cdef8fd6       etcd-newest-cni-702588                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-702588
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-702588
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=newest-cni-702588
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T20_03_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 20:03:18 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-702588
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 20:03:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 20:03:46 +0000   Mon, 27 Oct 2025 20:03:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 20:03:46 +0000   Mon, 27 Oct 2025 20:03:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 20:03:46 +0000   Mon, 27 Oct 2025 20:03:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 27 Oct 2025 20:03:46 +0000   Mon, 27 Oct 2025 20:03:13 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-702588
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                4ba06414-4234-4b37-9dae-dda0eb66f304
	  Boot ID:                    23ea05b4-8203-4fb9-a84a-5deb4b091cbb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-702588                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         35s
	  kube-system                 kindnet-7ctmm                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-newest-cni-702588             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-newest-cni-702588    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-k9lhg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-newest-cni-702588             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 27s                kube-proxy       
	  Normal   Starting                 8s                 kube-proxy       
	  Warning  CgroupV1                 44s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  44s (x9 over 44s)  kubelet          Node newest-cni-702588 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    44s (x8 over 44s)  kubelet          Node newest-cni-702588 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     44s (x7 over 44s)  kubelet          Node newest-cni-702588 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node newest-cni-702588 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 35s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node newest-cni-702588 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     35s                kubelet          Node newest-cni-702588 status is now: NodeHasSufficientPID
	  Normal   Starting                 35s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           31s                node-controller  Node newest-cni-702588 event: Registered Node newest-cni-702588 in Controller
	  Normal   Starting                 17s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 17s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  17s (x8 over 17s)  kubelet          Node newest-cni-702588 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17s (x8 over 17s)  kubelet          Node newest-cni-702588 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17s (x8 over 17s)  kubelet          Node newest-cni-702588 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7s                 node-controller  Node newest-cni-702588 event: Registered Node newest-cni-702588 in Controller
	
	
	==> dmesg <==
	[  +7.015891] overlayfs: idmapped layers are currently not supported
	[Oct27 19:41] overlayfs: idmapped layers are currently not supported
	[ +28.455216] overlayfs: idmapped layers are currently not supported
	[Oct27 19:42] overlayfs: idmapped layers are currently not supported
	[ +26.490174] overlayfs: idmapped layers are currently not supported
	[Oct27 19:44] overlayfs: idmapped layers are currently not supported
	[Oct27 19:45] overlayfs: idmapped layers are currently not supported
	[Oct27 19:47] overlayfs: idmapped layers are currently not supported
	[Oct27 19:49] overlayfs: idmapped layers are currently not supported
	[ +31.410335] overlayfs: idmapped layers are currently not supported
	[Oct27 19:51] overlayfs: idmapped layers are currently not supported
	[Oct27 19:53] overlayfs: idmapped layers are currently not supported
	[Oct27 19:54] overlayfs: idmapped layers are currently not supported
	[Oct27 19:55] overlayfs: idmapped layers are currently not supported
	[Oct27 19:56] overlayfs: idmapped layers are currently not supported
	[Oct27 19:57] overlayfs: idmapped layers are currently not supported
	[Oct27 19:58] overlayfs: idmapped layers are currently not supported
	[Oct27 19:59] overlayfs: idmapped layers are currently not supported
	[Oct27 20:00] overlayfs: idmapped layers are currently not supported
	[ +41.321877] overlayfs: idmapped layers are currently not supported
	[Oct27 20:01] overlayfs: idmapped layers are currently not supported
	[Oct27 20:02] overlayfs: idmapped layers are currently not supported
	[Oct27 20:03] overlayfs: idmapped layers are currently not supported
	[ +26.735505] overlayfs: idmapped layers are currently not supported
	[ +12.481352] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f8fab5749ac40e20fd27699d7357e78ed023c1efecb49166aa43e7474e86b557] <==
	{"level":"warn","ts":"2025-10-27T20:03:43.550597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:43.572783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:43.600339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:43.659272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:43.681954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:43.739206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:43.806200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:43.909622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:43.915136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:43.965253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:43.991617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:44.032678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:44.048273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:44.066740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:44.121125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:44.176186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:44.236740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:44.302048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:44.353725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:44.414281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:44.458622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:44.520073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:44.566522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:44.593988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:44.763955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50534","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:03:56 up  2:46,  0 user,  load average: 4.84, 3.45, 2.83
	Linux newest-cni-702588 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [20f5ab471ade7a086d7d0e7f822d761ca44030cdc590decc67a3adffd147ad8f] <==
	I1027 20:03:46.938531       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 20:03:46.938837       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1027 20:03:46.939095       1 main.go:148] setting mtu 1500 for CNI 
	I1027 20:03:46.939124       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 20:03:46.939135       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T20:03:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 20:03:47.149264       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 20:03:47.149357       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 20:03:47.149390       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 20:03:47.151078       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [72abb4993a475217f2a8c95e18b5ddddfda40e2dc2e0cf42b38d6d9ef04c3a63] <==
	I1027 20:03:46.371214       1 aggregator.go:171] initial CRD sync complete...
	I1027 20:03:46.371226       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 20:03:46.371234       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 20:03:46.371240       1 cache.go:39] Caches are synced for autoregister controller
	I1027 20:03:46.388061       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1027 20:03:46.388117       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 20:03:46.388143       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 20:03:46.421127       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1027 20:03:46.421257       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1027 20:03:46.424890       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1027 20:03:46.424943       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 20:03:46.439599       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1027 20:03:46.467010       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1027 20:03:46.607131       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 20:03:46.833264       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 20:03:47.074522       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 20:03:47.226970       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 20:03:47.352787       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 20:03:47.375467       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 20:03:47.697449       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.224.93"}
	I1027 20:03:47.739895       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.237.64"}
	I1027 20:03:49.879687       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 20:03:49.930721       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 20:03:50.073932       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 20:03:50.221345       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [8d587d21ae021230e28963c8c71ea231d0d95d971ef978ac498061c57511609b] <==
	I1027 20:03:49.677054       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 20:03:49.677104       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 20:03:49.677169       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-702588"
	I1027 20:03:49.677200       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1027 20:03:49.677292       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 20:03:49.677440       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1027 20:03:49.678304       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1027 20:03:49.678360       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 20:03:49.678402       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1027 20:03:49.678460       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1027 20:03:49.678655       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 20:03:49.694419       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 20:03:49.694715       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 20:03:49.694837       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1027 20:03:49.694859       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 20:03:49.694935       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 20:03:49.712563       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 20:03:49.712742       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 20:03:49.713000       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 20:03:49.714267       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1027 20:03:49.714417       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1027 20:03:49.717720       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 20:03:49.719155       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 20:03:49.720391       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 20:03:49.726215       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-proxy [a70a228324365c1e561d8cfe7dd7a2028a7e30509835bde732e44b3427b70131] <==
	I1027 20:03:47.425988       1 server_linux.go:53] "Using iptables proxy"
	I1027 20:03:47.663894       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 20:03:47.766228       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 20:03:47.766356       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1027 20:03:47.766506       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 20:03:47.825621       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 20:03:47.825746       1 server_linux.go:132] "Using iptables Proxier"
	I1027 20:03:47.829705       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 20:03:47.830192       1 server.go:527] "Version info" version="v1.34.1"
	I1027 20:03:47.830463       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 20:03:47.832326       1 config.go:200] "Starting service config controller"
	I1027 20:03:47.834833       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 20:03:47.832630       1 config.go:106] "Starting endpoint slice config controller"
	I1027 20:03:47.835292       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 20:03:47.832649       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 20:03:47.835410       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 20:03:47.835470       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 20:03:47.833745       1 config.go:309] "Starting node config controller"
	I1027 20:03:47.835550       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 20:03:47.835592       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 20:03:47.935240       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 20:03:47.935454       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [50e1303f4abc6c0afd542eb75bdf66db50d8a97bbe1ab2251b60144ee40ebbdf] <==
	I1027 20:03:42.539104       1 serving.go:386] Generated self-signed cert in-memory
	I1027 20:03:47.534507       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 20:03:47.534966       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 20:03:47.609503       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 20:03:47.609689       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1027 20:03:47.609712       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1027 20:03:47.609740       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 20:03:47.612823       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:03:47.612843       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:03:47.612862       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 20:03:47.612867       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 20:03:47.710450       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1027 20:03:47.714932       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 20:03:47.715269       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: I1027 20:03:46.389792     726 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-702588"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: I1027 20:03:46.389889     726 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-702588"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: I1027 20:03:46.389916     726 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: I1027 20:03:46.390803     726 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: I1027 20:03:46.399082     726 apiserver.go:52] "Watching apiserver"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: I1027 20:03:46.399349     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-702588"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: E1027 20:03:46.449043     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-702588\" already exists" pod="kube-system/kube-scheduler-newest-cni-702588"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: I1027 20:03:46.449077     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-702588"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: E1027 20:03:46.449296     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-702588\" already exists" pod="kube-system/kube-scheduler-newest-cni-702588"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: E1027 20:03:46.477957     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-702588\" already exists" pod="kube-system/etcd-newest-cni-702588"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: I1027 20:03:46.477992     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-702588"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: I1027 20:03:46.487719     726 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: E1027 20:03:46.540982     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-702588\" already exists" pod="kube-system/kube-apiserver-newest-cni-702588"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: I1027 20:03:46.541015     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-702588"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: E1027 20:03:46.577228     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-702588\" already exists" pod="kube-system/kube-controller-manager-newest-cni-702588"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: I1027 20:03:46.584777     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98e70164-cd51-4563-91d0-7c0bae3c2ade-lib-modules\") pod \"kindnet-7ctmm\" (UID: \"98e70164-cd51-4563-91d0-7c0bae3c2ade\") " pod="kube-system/kindnet-7ctmm"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: I1027 20:03:46.584855     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f36ed32e-d331-485d-ba07-01353f65e231-xtables-lock\") pod \"kube-proxy-k9lhg\" (UID: \"f36ed32e-d331-485d-ba07-01353f65e231\") " pod="kube-system/kube-proxy-k9lhg"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: I1027 20:03:46.584881     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f36ed32e-d331-485d-ba07-01353f65e231-lib-modules\") pod \"kube-proxy-k9lhg\" (UID: \"f36ed32e-d331-485d-ba07-01353f65e231\") " pod="kube-system/kube-proxy-k9lhg"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: I1027 20:03:46.584899     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/98e70164-cd51-4563-91d0-7c0bae3c2ade-cni-cfg\") pod \"kindnet-7ctmm\" (UID: \"98e70164-cd51-4563-91d0-7c0bae3c2ade\") " pod="kube-system/kindnet-7ctmm"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: I1027 20:03:46.584920     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98e70164-cd51-4563-91d0-7c0bae3c2ade-xtables-lock\") pod \"kindnet-7ctmm\" (UID: \"98e70164-cd51-4563-91d0-7c0bae3c2ade\") " pod="kube-system/kindnet-7ctmm"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: I1027 20:03:46.647702     726 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 27 20:03:46 newest-cni-702588 kubelet[726]: W1027 20:03:46.751571     726 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/129b04b839d9dc50e5b41cb4e74b918aa12b2f3b4e01d95ffdf708dd0ebe16e9/crio-3a57f0ae66f27e1bce32f99c1837f26a5d8b2b06f034a0c917c4dd9011372b52 WatchSource:0}: Error finding container 3a57f0ae66f27e1bce32f99c1837f26a5d8b2b06f034a0c917c4dd9011372b52: Status 404 returned error can't find the container with id 3a57f0ae66f27e1bce32f99c1837f26a5d8b2b06f034a0c917c4dd9011372b52
	Oct 27 20:03:49 newest-cni-702588 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 20:03:49 newest-cni-702588 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 20:03:49 newest-cni-702588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
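helpers_test note: the kubelet entries above are read from the node's systemd journal. For a profile that is still running (this one is deleted later in the run), the same view could be reproduced manually; the command below is a standard journalctl invocation over minikube ssh, shown as an illustrative suggestion rather than something the harness itself ran:
	out/minikube-linux-arm64 -p newest-cni-702588 ssh "sudo journalctl -u kubelet --no-pager -n 25"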
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-702588 -n newest-cni-702588
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-702588 -n newest-cni-702588: exit status 2 (542.081108ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-702588 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-xclwd storage-provisioner dashboard-metrics-scraper-6ffb444bf9-whld8 kubernetes-dashboard-855c9754f9-bl8m9
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-702588 describe pod coredns-66bc5c9577-xclwd storage-provisioner dashboard-metrics-scraper-6ffb444bf9-whld8 kubernetes-dashboard-855c9754f9-bl8m9
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-702588 describe pod coredns-66bc5c9577-xclwd storage-provisioner dashboard-metrics-scraper-6ffb444bf9-whld8 kubernetes-dashboard-855c9754f9-bl8m9: exit status 1 (154.629561ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-xclwd" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-whld8" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-bl8m9" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-702588 describe pod coredns-66bc5c9577-xclwd storage-provisioner dashboard-metrics-scraper-6ffb444bf9-whld8 kubernetes-dashboard-855c9754f9-bl8m9: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (8.45s)
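For reference, the non-running-pod query the post-mortem ran above is a plain kubectl field selector; the same command from helpers_test.go:269, reformatted for readability:
	kubectl --context newest-cni-702588 get pods -A \
	  --field-selector=status.phase!=Running \
	  -o jsonpath='{.items[*].metadata.name}'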

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (8.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-073048 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-073048 --alsologtostderr -v=1: exit status 80 (2.145623429s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-073048 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 20:04:57.629277  482447 out.go:360] Setting OutFile to fd 1 ...
	I1027 20:04:57.629437  482447 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:04:57.629467  482447 out.go:374] Setting ErrFile to fd 2...
	I1027 20:04:57.629488  482447 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:04:57.629773  482447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 20:04:57.630146  482447 out.go:368] Setting JSON to false
	I1027 20:04:57.630201  482447 mustload.go:65] Loading cluster: default-k8s-diff-port-073048
	I1027 20:04:57.630705  482447 config.go:182] Loaded profile config "default-k8s-diff-port-073048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:04:57.632546  482447 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-073048 --format={{.State.Status}}
	I1027 20:04:57.654506  482447 host.go:66] Checking if "default-k8s-diff-port-073048" exists ...
	I1027 20:04:57.654916  482447 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 20:04:57.721742  482447 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-27 20:04:57.710519723 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 20:04:57.722536  482447 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-073048 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1027 20:04:57.726143  482447 out.go:179] * Pausing node default-k8s-diff-port-073048 ... 
	I1027 20:04:57.729124  482447 host.go:66] Checking if "default-k8s-diff-port-073048" exists ...
	I1027 20:04:57.729544  482447 ssh_runner.go:195] Run: systemctl --version
	I1027 20:04:57.729600  482447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-073048
	I1027 20:04:57.747351  482447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/default-k8s-diff-port-073048/id_rsa Username:docker}
	I1027 20:04:57.862842  482447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 20:04:57.878195  482447 pause.go:52] kubelet running: true
	I1027 20:04:57.878290  482447 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 20:04:58.178171  482447 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 20:04:58.178274  482447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 20:04:58.248338  482447 cri.go:89] found id: "ca3df37f2ff7a2e1fc49a86c023182e51049920e03c757f37b9467c42e204794"
	I1027 20:04:58.248360  482447 cri.go:89] found id: "33d4c8937c642e9e870f4db040cb94a4ed803df3edc604abb12f7c56ef7a0d44"
	I1027 20:04:58.248366  482447 cri.go:89] found id: "392cb2b4d36cd351dcd3237b475b909427deb573bd6d67128500918d61224f5a"
	I1027 20:04:58.248370  482447 cri.go:89] found id: "4a9c8f23bb6399f3527d0263bd18f2693466d4304ee4f8059f1eb907bc160eab"
	I1027 20:04:58.248373  482447 cri.go:89] found id: "ba49fcd5f05b897c504ab81db54a96c17c13d29d0b5bac3058cf7c87bc70aa26"
	I1027 20:04:58.248377  482447 cri.go:89] found id: "ee7be1ab30d5b150cf1282ac50a0ab38c89e1cf1e5e6081e3e51571014d16a0b"
	I1027 20:04:58.248379  482447 cri.go:89] found id: "420cfe1b91ebd7a517df417e1a1f173e78e7e81a49b61d73bcec204c005ecf7c"
	I1027 20:04:58.248382  482447 cri.go:89] found id: "70203b34337df7979e43ac6fb0f4905a8cdc06feeb089b41d83af7060159d8da"
	I1027 20:04:58.248387  482447 cri.go:89] found id: "47af99655b9f01b93fb734ba61f2da0603ae3f7fa9c76d3c797aeb6e931e722d"
	I1027 20:04:58.248393  482447 cri.go:89] found id: "a1c86c4e14c18a2f8229ec50c7b75a1fc3ad0e89f2e341db55d3435d58734e1e"
	I1027 20:04:58.248397  482447 cri.go:89] found id: "2115489a06bb5e5da51f7bed723595a80cff36c03176614c6ce4478ce4e468e7"
	I1027 20:04:58.248400  482447 cri.go:89] found id: "7c44bc52e5ff96e93a4c96064dd09128b39f114debcf592f489e9ef5f042766b"
	I1027 20:04:58.248404  482447 cri.go:89] found id: ""
	I1027 20:04:58.248454  482447 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 20:04:58.267653  482447 retry.go:31] will retry after 213.476733ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T20:04:58Z" level=error msg="open /run/runc: no such file or directory"
	I1027 20:04:58.482175  482447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 20:04:58.495257  482447 pause.go:52] kubelet running: false
	I1027 20:04:58.495321  482447 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 20:04:58.746824  482447 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 20:04:58.746902  482447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 20:04:58.841091  482447 cri.go:89] found id: "ca3df37f2ff7a2e1fc49a86c023182e51049920e03c757f37b9467c42e204794"
	I1027 20:04:58.841112  482447 cri.go:89] found id: "33d4c8937c642e9e870f4db040cb94a4ed803df3edc604abb12f7c56ef7a0d44"
	I1027 20:04:58.841117  482447 cri.go:89] found id: "392cb2b4d36cd351dcd3237b475b909427deb573bd6d67128500918d61224f5a"
	I1027 20:04:58.841121  482447 cri.go:89] found id: "4a9c8f23bb6399f3527d0263bd18f2693466d4304ee4f8059f1eb907bc160eab"
	I1027 20:04:58.841124  482447 cri.go:89] found id: "ba49fcd5f05b897c504ab81db54a96c17c13d29d0b5bac3058cf7c87bc70aa26"
	I1027 20:04:58.841128  482447 cri.go:89] found id: "ee7be1ab30d5b150cf1282ac50a0ab38c89e1cf1e5e6081e3e51571014d16a0b"
	I1027 20:04:58.841131  482447 cri.go:89] found id: "420cfe1b91ebd7a517df417e1a1f173e78e7e81a49b61d73bcec204c005ecf7c"
	I1027 20:04:58.841134  482447 cri.go:89] found id: "70203b34337df7979e43ac6fb0f4905a8cdc06feeb089b41d83af7060159d8da"
	I1027 20:04:58.841137  482447 cri.go:89] found id: "47af99655b9f01b93fb734ba61f2da0603ae3f7fa9c76d3c797aeb6e931e722d"
	I1027 20:04:58.841142  482447 cri.go:89] found id: "a1c86c4e14c18a2f8229ec50c7b75a1fc3ad0e89f2e341db55d3435d58734e1e"
	I1027 20:04:58.841146  482447 cri.go:89] found id: "2115489a06bb5e5da51f7bed723595a80cff36c03176614c6ce4478ce4e468e7"
	I1027 20:04:58.841149  482447 cri.go:89] found id: "7c44bc52e5ff96e93a4c96064dd09128b39f114debcf592f489e9ef5f042766b"
	I1027 20:04:58.841151  482447 cri.go:89] found id: ""
	I1027 20:04:58.841208  482447 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 20:04:58.865171  482447 retry.go:31] will retry after 553.09438ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T20:04:58Z" level=error msg="open /run/runc: no such file or directory"
	I1027 20:04:59.418521  482447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 20:04:59.432249  482447 pause.go:52] kubelet running: false
	I1027 20:04:59.432316  482447 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 20:04:59.609375  482447 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 20:04:59.609463  482447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 20:04:59.687245  482447 cri.go:89] found id: "ca3df37f2ff7a2e1fc49a86c023182e51049920e03c757f37b9467c42e204794"
	I1027 20:04:59.687275  482447 cri.go:89] found id: "33d4c8937c642e9e870f4db040cb94a4ed803df3edc604abb12f7c56ef7a0d44"
	I1027 20:04:59.687281  482447 cri.go:89] found id: "392cb2b4d36cd351dcd3237b475b909427deb573bd6d67128500918d61224f5a"
	I1027 20:04:59.687285  482447 cri.go:89] found id: "4a9c8f23bb6399f3527d0263bd18f2693466d4304ee4f8059f1eb907bc160eab"
	I1027 20:04:59.687288  482447 cri.go:89] found id: "ba49fcd5f05b897c504ab81db54a96c17c13d29d0b5bac3058cf7c87bc70aa26"
	I1027 20:04:59.687292  482447 cri.go:89] found id: "ee7be1ab30d5b150cf1282ac50a0ab38c89e1cf1e5e6081e3e51571014d16a0b"
	I1027 20:04:59.687295  482447 cri.go:89] found id: "420cfe1b91ebd7a517df417e1a1f173e78e7e81a49b61d73bcec204c005ecf7c"
	I1027 20:04:59.687299  482447 cri.go:89] found id: "70203b34337df7979e43ac6fb0f4905a8cdc06feeb089b41d83af7060159d8da"
	I1027 20:04:59.687302  482447 cri.go:89] found id: "47af99655b9f01b93fb734ba61f2da0603ae3f7fa9c76d3c797aeb6e931e722d"
	I1027 20:04:59.687314  482447 cri.go:89] found id: "a1c86c4e14c18a2f8229ec50c7b75a1fc3ad0e89f2e341db55d3435d58734e1e"
	I1027 20:04:59.687322  482447 cri.go:89] found id: "2115489a06bb5e5da51f7bed723595a80cff36c03176614c6ce4478ce4e468e7"
	I1027 20:04:59.687325  482447 cri.go:89] found id: "7c44bc52e5ff96e93a4c96064dd09128b39f114debcf592f489e9ef5f042766b"
	I1027 20:04:59.687351  482447 cri.go:89] found id: ""
	I1027 20:04:59.687435  482447 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 20:04:59.702476  482447 out.go:203] 
	W1027 20:04:59.705350  482447 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T20:04:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T20:04:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 20:04:59.705375  482447 out.go:285] * 
	* 
	W1027 20:04:59.712194  482447 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 20:04:59.715432  482447 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-073048 --alsologtostderr -v=1 failed: exit status 80
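The pause loop above gives up because every `sudo runc list -f json` attempt fails with `open /run/runc: no such file or directory` even though crictl still reports running containers, so minikube can never enumerate what to pause. A minimal manual probe of that state from the host might look like the sketch below (the ls/crictl check is a suggested diagnostic under that assumption, not part of the test run):
	out/minikube-linux-arm64 -p default-k8s-diff-port-073048 ssh \
	  "ls -ld /run/runc; sudo crictl ps --quiet | head -n 3"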
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-073048
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-073048:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0d0a6d2c139cfccb44f717c1d4bf1de32b26fb3f98fc7320c9146a486f5ddfbb",
	        "Created": "2025-10-27T20:02:05.981897269Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 476059,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T20:03:43.990427612Z",
	            "FinishedAt": "2025-10-27T20:03:42.766648605Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/0d0a6d2c139cfccb44f717c1d4bf1de32b26fb3f98fc7320c9146a486f5ddfbb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d0a6d2c139cfccb44f717c1d4bf1de32b26fb3f98fc7320c9146a486f5ddfbb/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d0a6d2c139cfccb44f717c1d4bf1de32b26fb3f98fc7320c9146a486f5ddfbb/hosts",
	        "LogPath": "/var/lib/docker/containers/0d0a6d2c139cfccb44f717c1d4bf1de32b26fb3f98fc7320c9146a486f5ddfbb/0d0a6d2c139cfccb44f717c1d4bf1de32b26fb3f98fc7320c9146a486f5ddfbb-json.log",
	        "Name": "/default-k8s-diff-port-073048",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-073048:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-073048",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0d0a6d2c139cfccb44f717c1d4bf1de32b26fb3f98fc7320c9146a486f5ddfbb",
	                "LowerDir": "/var/lib/docker/overlay2/7b35ec25c436fa4e076bd86f8611b944f0fb1a3f41e654812ca0a32f05e4bb57-init/diff:/var/lib/docker/overlay2/0567218bbe05e8b59b2aef7ad82032d6716040c1f9d9d73e7de8682101079f76/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7b35ec25c436fa4e076bd86f8611b944f0fb1a3f41e654812ca0a32f05e4bb57/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7b35ec25c436fa4e076bd86f8611b944f0fb1a3f41e654812ca0a32f05e4bb57/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7b35ec25c436fa4e076bd86f8611b944f0fb1a3f41e654812ca0a32f05e4bb57/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-073048",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-073048/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-073048",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-073048",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-073048",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "91263f5cf3ef077725086a102c438818557a85ae26f91c4751784162e0b1d10d",
	            "SandboxKey": "/var/run/docker/netns/91263f5cf3ef",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-073048": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:dd:1a:1a:95:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "693360b70a0a6dc4cb15a9fc19e2d3b83d1fde9de38ebc7c4ce28555e19407c1",
	                    "EndpointID": "de4961e59fcb32da19ce4be6e3743ffd1514f92f86f4d3e01a8747fc10ff25eb",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-073048",
	                        "0d0a6d2c139c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
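The full inspect dump above can be narrowed to just the fields these checks rely on via Go templates; the port template below is the same one the pause command used earlier in this log, and the state template is standard docker inspect usage:
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' default-k8s-diff-port-073048
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-073048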
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-073048 -n default-k8s-diff-port-073048
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-073048 -n default-k8s-diff-port-073048: exit status 2 (631.068265ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
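Exit status 2 is tolerated here ("may be ok") because the status command was asked for a single component's state via a Go template while other components are degraded. Multiple fields can be queried the same way; {{.Host}} and {{.APIServer}} are exercised elsewhere in this report, and {{.Kubelet}} is included as an assumed-valid companion field:
	out/minikube-linux-arm64 status -p default-k8s-diff-port-073048 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'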
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-073048 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-073048 logs -n 25: (1.544201143s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p no-preload-300878 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │                     │
	│ delete  │ -p no-preload-300878                                                                                                                                                                                                                          │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ delete  │ -p no-preload-300878                                                                                                                                                                                                                          │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ delete  │ -p disable-driver-mounts-230052                                                                                                                                                                                                               │ disable-driver-mounts-230052 │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:02 UTC │
	│ start   │ -p default-k8s-diff-port-073048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-073048 │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:03 UTC │
	│ image   │ embed-certs-629838 image list --format=json                                                                                                                                                                                                   │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:02 UTC │
	│ pause   │ -p embed-certs-629838 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │                     │
	│ delete  │ -p embed-certs-629838                                                                                                                                                                                                                         │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:02 UTC │
	│ delete  │ -p embed-certs-629838                                                                                                                                                                                                                         │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:02 UTC │
	│ start   │ -p newest-cni-702588 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:03 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-073048 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-073048 │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-702588 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │                     │
	│ stop    │ -p newest-cni-702588 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │ 27 Oct 25 20:03 UTC │
	│ stop    │ -p default-k8s-diff-port-073048 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-073048 │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │ 27 Oct 25 20:03 UTC │
	│ addons  │ enable dashboard -p newest-cni-702588 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │ 27 Oct 25 20:03 UTC │
	│ start   │ -p newest-cni-702588 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │ 27 Oct 25 20:03 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-073048 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-073048 │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │ 27 Oct 25 20:03 UTC │
	│ start   │ -p default-k8s-diff-port-073048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-073048 │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │ 27 Oct 25 20:04 UTC │
	│ image   │ newest-cni-702588 image list --format=json                                                                                                                                                                                                    │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │ 27 Oct 25 20:03 UTC │
	│ pause   │ -p newest-cni-702588 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │                     │
	│ delete  │ -p newest-cni-702588                                                                                                                                                                                                                          │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │ 27 Oct 25 20:04 UTC │
	│ delete  │ -p newest-cni-702588                                                                                                                                                                                                                          │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:04 UTC │ 27 Oct 25 20:04 UTC │
	│ start   │ -p custom-flannel-750423 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio                                                                            │ custom-flannel-750423        │ jenkins │ v1.37.0 │ 27 Oct 25 20:04 UTC │                     │
	│ image   │ default-k8s-diff-port-073048 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-073048 │ jenkins │ v1.37.0 │ 27 Oct 25 20:04 UTC │ 27 Oct 25 20:04 UTC │
	│ pause   │ -p default-k8s-diff-port-073048 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-073048 │ jenkins │ v1.37.0 │ 27 Oct 25 20:04 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 20:04:00
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 20:04:00.783019  479026 out.go:360] Setting OutFile to fd 1 ...
	I1027 20:04:00.783184  479026 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:04:00.783196  479026 out.go:374] Setting ErrFile to fd 2...
	I1027 20:04:00.783202  479026 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:04:00.783463  479026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 20:04:00.783896  479026 out.go:368] Setting JSON to false
	I1027 20:04:00.784883  479026 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9993,"bootTime":1761585448,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1027 20:04:00.784954  479026 start.go:141] virtualization:  
	I1027 20:04:00.788356  479026 out.go:179] * [custom-flannel-750423] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 20:04:00.791347  479026 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 20:04:00.791462  479026 notify.go:220] Checking for updates...
	I1027 20:04:00.797197  479026 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 20:04:00.800178  479026 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 20:04:00.803142  479026 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube
	I1027 20:04:00.805935  479026 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 20:04:00.808800  479026 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 20:04:00.812237  479026 config.go:182] Loaded profile config "default-k8s-diff-port-073048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:04:00.812349  479026 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 20:04:00.860646  479026 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 20:04:00.860765  479026 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 20:04:00.967018  479026 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-27 20:04:00.957932285 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 20:04:00.967126  479026 docker.go:318] overlay module found
	I1027 20:04:00.970325  479026 out.go:179] * Using the docker driver based on user configuration
	I1027 20:04:00.973105  479026 start.go:305] selected driver: docker
	I1027 20:04:00.973130  479026 start.go:925] validating driver "docker" against <nil>
	I1027 20:04:00.973146  479026 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 20:04:00.973887  479026 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 20:04:01.083634  479026 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-27 20:04:01.070167043 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 20:04:01.083786  479026 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1027 20:04:01.084015  479026 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 20:04:01.086876  479026 out.go:179] * Using Docker driver with root privileges
	I1027 20:04:01.089654  479026 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1027 20:04:01.089694  479026 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1027 20:04:01.089780  479026 start.go:349] cluster config:
	{Name:custom-flannel-750423 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-750423 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:04:01.092948  479026 out.go:179] * Starting "custom-flannel-750423" primary control-plane node in "custom-flannel-750423" cluster
	I1027 20:04:01.095758  479026 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 20:04:01.098593  479026 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 20:04:01.100659  479026 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:04:01.100724  479026 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 20:04:01.100738  479026 cache.go:58] Caching tarball of preloaded images
	I1027 20:04:01.100827  479026 preload.go:233] Found /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 20:04:01.100862  479026 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 20:04:01.100973  479026 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/config.json ...
	I1027 20:04:01.100998  479026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/config.json: {Name:mk725109b4ba9ee7f5cef92c60e855205159cccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:04:01.101165  479026 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 20:04:01.131715  479026 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 20:04:01.131745  479026 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 20:04:01.131765  479026 cache.go:232] Successfully downloaded all kic artifacts
	I1027 20:04:01.131789  479026 start.go:360] acquireMachinesLock for custom-flannel-750423: {Name:mked453956a4756e2adaba8128a6230e7dd0be3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 20:04:01.131901  479026 start.go:364] duration metric: took 89.86µs to acquireMachinesLock for "custom-flannel-750423"
	I1027 20:04:01.131933  479026 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-750423 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-750423 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 20:04:01.132007  479026 start.go:125] createHost starting for "" (driver="docker")
	I1027 20:03:59.915295  475934 node_ready.go:49] node "default-k8s-diff-port-073048" is "Ready"
	I1027 20:03:59.915322  475934 node_ready.go:38] duration metric: took 6.447641314s for node "default-k8s-diff-port-073048" to be "Ready" ...
	I1027 20:03:59.915335  475934 api_server.go:52] waiting for apiserver process to appear ...
	I1027 20:03:59.915391  475934 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 20:04:02.979561  475934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.551180389s)
	I1027 20:04:02.979646  475934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.507134932s)
	I1027 20:04:03.124756  475934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.173960698s)
	I1027 20:04:03.124794  475934 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.209386828s)
	I1027 20:04:03.124819  475934 api_server.go:72] duration metric: took 10.215957574s to wait for apiserver process to appear ...
	I1027 20:04:03.124825  475934 api_server.go:88] waiting for apiserver healthz status ...
	I1027 20:04:03.124903  475934 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1027 20:04:03.128284  475934 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-073048 addons enable metrics-server
	
	I1027 20:04:03.131474  475934 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1027 20:04:03.134475  475934 addons.go:514] duration metric: took 10.225259673s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1027 20:04:03.156605  475934 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 20:04:03.156636  475934 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 20:04:01.137166  479026 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 20:04:01.137486  479026 start.go:159] libmachine.API.Create for "custom-flannel-750423" (driver="docker")
	I1027 20:04:01.137543  479026 client.go:168] LocalClient.Create starting
	I1027 20:04:01.137624  479026 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem
	I1027 20:04:01.137669  479026 main.go:141] libmachine: Decoding PEM data...
	I1027 20:04:01.137699  479026 main.go:141] libmachine: Parsing certificate...
	I1027 20:04:01.137769  479026 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem
	I1027 20:04:01.137792  479026 main.go:141] libmachine: Decoding PEM data...
	I1027 20:04:01.137804  479026 main.go:141] libmachine: Parsing certificate...
	I1027 20:04:01.138202  479026 cli_runner.go:164] Run: docker network inspect custom-flannel-750423 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 20:04:01.166848  479026 cli_runner.go:211] docker network inspect custom-flannel-750423 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 20:04:01.166946  479026 network_create.go:284] running [docker network inspect custom-flannel-750423] to gather additional debugging logs...
	I1027 20:04:01.166963  479026 cli_runner.go:164] Run: docker network inspect custom-flannel-750423
	W1027 20:04:01.196382  479026 cli_runner.go:211] docker network inspect custom-flannel-750423 returned with exit code 1
	I1027 20:04:01.196450  479026 network_create.go:287] error running [docker network inspect custom-flannel-750423]: docker network inspect custom-flannel-750423: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-750423 not found
	I1027 20:04:01.196475  479026 network_create.go:289] output of [docker network inspect custom-flannel-750423]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-750423 not found
	
	** /stderr **
	I1027 20:04:01.196576  479026 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 20:04:01.232088  479026 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-74ee89127400 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:aa:c7:29:bd:7a:a9} reservation:<nil>}
	I1027 20:04:01.232543  479026 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9c57ca829ac8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:0a:24:08:53:d6:07} reservation:<nil>}
	I1027 20:04:01.232892  479026 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b7a7c45fd176 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:52:e6:cd:c8:a6} reservation:<nil>}
	I1027 20:04:01.233394  479026 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019abb40}
	I1027 20:04:01.233421  479026 network_create.go:124] attempt to create docker network custom-flannel-750423 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1027 20:04:01.233490  479026 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-750423 custom-flannel-750423
	I1027 20:04:01.320633  479026 network_create.go:108] docker network custom-flannel-750423 192.168.76.0/24 created
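The subnet walk above is a first-fit scan: minikube tries the private 192.168.x.0/24 candidates in order, skips any CIDR already bound to a host bridge (192.168.49/58/67 here), and takes the first free one, 192.168.76.0/24. A minimal Go sketch of that selection logic, with a hypothetical pickSubnet helper and a taken set standing in for the real bridge-interface probe:

    package main

    import "fmt"

    // pickSubnet returns the first /24 from the candidate list that is not
    // already in use. "taken" is assumed to be collected beforehand, e.g. by
    // inspecting the host's bridge interfaces or existing docker networks.
    func pickSubnet(candidates []string, taken map[string]bool) (string, error) {
    	for _, cidr := range candidates {
    		if !taken[cidr] {
    			return cidr, nil
    		}
    	}
    	return "", fmt.Errorf("no free subnet among %d candidates", len(candidates))
    }

    func main() {
    	taken := map[string]bool{
    		"192.168.49.0/24": true, // br-74ee89127400
    		"192.168.58.0/24": true, // br-9c57ca829ac8
    		"192.168.67.0/24": true, // br-b7a7c45fd176
    	}
    	candidates := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24"}
    	subnet, _ := pickSubnet(candidates, taken)
    	fmt.Println(subnet) // 192.168.76.0/24, matching the log above
    }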
	I1027 20:04:01.320668  479026 kic.go:121] calculated static IP "192.168.76.2" for the "custom-flannel-750423" container
	I1027 20:04:01.320741  479026 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 20:04:01.349042  479026 cli_runner.go:164] Run: docker volume create custom-flannel-750423 --label name.minikube.sigs.k8s.io=custom-flannel-750423 --label created_by.minikube.sigs.k8s.io=true
	I1027 20:04:01.379884  479026 oci.go:103] Successfully created a docker volume custom-flannel-750423
	I1027 20:04:01.379984  479026 cli_runner.go:164] Run: docker run --rm --name custom-flannel-750423-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-750423 --entrypoint /usr/bin/test -v custom-flannel-750423:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 20:04:02.079831  479026 oci.go:107] Successfully prepared a docker volume custom-flannel-750423
	I1027 20:04:02.079871  479026 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:04:02.079891  479026 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 20:04:02.079971  479026 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v custom-flannel-750423:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1027 20:04:03.625517  475934 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1027 20:04:03.633831  475934 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1027 20:04:03.635036  475934 api_server.go:141] control plane version: v1.34.1
	I1027 20:04:03.635063  475934 api_server.go:131] duration metric: took 510.172833ms to wait for apiserver health ...
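The healthz exchange above is a plain HTTPS poll: the endpoint answers 500 with a per-check breakdown (the [-] line marks the poststarthook still pending) until bootstrap completes, then 200 with body "ok". A minimal sketch of such a poll, assuming a hypothetical waitHealthz helper and that skipping TLS verification is acceptable against the cluster's self-signed apiserver certificate:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitHealthz polls url until it returns HTTP 200 or the deadline passes.
    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// The apiserver cert is self-signed in this setup; skip verification.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // body is "ok"
    			}
    			fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body) // e.g. the 500 breakdown above
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("healthz at %s not ready after %s", url, timeout)
    }

    func main() {
    	fmt.Println(waitHealthz("https://192.168.85.2:8444/healthz", time.Minute))
    }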
	I1027 20:04:03.635072  475934 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 20:04:03.639420  475934 system_pods.go:59] 8 kube-system pods found
	I1027 20:04:03.639499  475934 system_pods.go:61] "coredns-66bc5c9577-6vc9v" [5d420b85-b106-4d91-9ebd-483f8ccfa445] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:04:03.639525  475934 system_pods.go:61] "etcd-default-k8s-diff-port-073048" [13538216-46ea-4c98-a89b-a4d0be59d38b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 20:04:03.639572  475934 system_pods.go:61] "kindnet-qc8zw" [10158916-6994-4c41-ba7d-e5bd80a7fd56] Running
	I1027 20:04:03.639600  475934 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-073048" [7bb63b28-d55e-4f2f-bd52-0dbce0ee5858] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 20:04:03.639661  475934 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-073048" [502a218b-490e-47f8-b946-6d2df9e30913] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 20:04:03.639689  475934 system_pods.go:61] "kube-proxy-dsq46" [91a97ff3-0f9b-41c0-bbec-870515448861] Running
	I1027 20:04:03.639732  475934 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-073048" [b090cbea-1aa4-46ff-a144-7a93a8fdeece] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 20:04:03.639754  475934 system_pods.go:61] "storage-provisioner" [9fef6eaa-5369-4f2d-9dd1-f3d7074b9a77] Running
	I1027 20:04:03.639774  475934 system_pods.go:74] duration metric: took 4.696591ms to wait for pod list to return data ...
	I1027 20:04:03.639795  475934 default_sa.go:34] waiting for default service account to be created ...
	I1027 20:04:03.643039  475934 default_sa.go:45] found service account: "default"
	I1027 20:04:03.643061  475934 default_sa.go:55] duration metric: took 3.233436ms for default service account to be created ...
	I1027 20:04:03.643070  475934 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 20:04:03.647020  475934 system_pods.go:86] 8 kube-system pods found
	I1027 20:04:03.647104  475934 system_pods.go:89] "coredns-66bc5c9577-6vc9v" [5d420b85-b106-4d91-9ebd-483f8ccfa445] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:04:03.647130  475934 system_pods.go:89] "etcd-default-k8s-diff-port-073048" [13538216-46ea-4c98-a89b-a4d0be59d38b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 20:04:03.647169  475934 system_pods.go:89] "kindnet-qc8zw" [10158916-6994-4c41-ba7d-e5bd80a7fd56] Running
	I1027 20:04:03.647198  475934 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-073048" [7bb63b28-d55e-4f2f-bd52-0dbce0ee5858] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 20:04:03.647223  475934 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-073048" [502a218b-490e-47f8-b946-6d2df9e30913] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 20:04:03.647262  475934 system_pods.go:89] "kube-proxy-dsq46" [91a97ff3-0f9b-41c0-bbec-870515448861] Running
	I1027 20:04:03.647291  475934 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-073048" [b090cbea-1aa4-46ff-a144-7a93a8fdeece] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 20:04:03.647314  475934 system_pods.go:89] "storage-provisioner" [9fef6eaa-5369-4f2d-9dd1-f3d7074b9a77] Running
	I1027 20:04:03.647355  475934 system_pods.go:126] duration metric: took 4.278083ms to wait for k8s-apps to be running ...
	I1027 20:04:03.647383  475934 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 20:04:03.647477  475934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 20:04:03.664603  475934 system_svc.go:56] duration metric: took 17.211852ms WaitForService to wait for kubelet
	I1027 20:04:03.664677  475934 kubeadm.go:586] duration metric: took 10.755812811s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 20:04:03.664714  475934 node_conditions.go:102] verifying NodePressure condition ...
	I1027 20:04:03.668523  475934 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 20:04:03.668603  475934 node_conditions.go:123] node cpu capacity is 2
	I1027 20:04:03.668633  475934 node_conditions.go:105] duration metric: took 3.877906ms to run NodePressure ...
	I1027 20:04:03.668676  475934 start.go:241] waiting for startup goroutines ...
	I1027 20:04:03.668701  475934 start.go:246] waiting for cluster config update ...
	I1027 20:04:03.668726  475934 start.go:255] writing updated cluster config ...
	I1027 20:04:03.669080  475934 ssh_runner.go:195] Run: rm -f paused
	I1027 20:04:03.674273  475934 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 20:04:03.682760  475934 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6vc9v" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 20:04:05.720117  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
	W1027 20:04:08.191255  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
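The pod_ready waits above (and continuing further below) repeatedly read each kube-system pod and test its PodReady condition until it turns True or the pod disappears. A minimal client-go sketch of that per-pod check, with a hypothetical PodReady helper and a *kubernetes.Clientset assumed to be built from the profile's kubeconfig:

    package readiness

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // PodReady reports whether the named pod's PodReady condition is True.
    // A caller polls this on an interval, treating a NotFound error as
    // "pod is gone", which also satisfies the wait in the log above.
    func PodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }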
	I1027 20:04:06.938134  479026 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v custom-flannel-750423:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.858113514s)
	I1027 20:04:06.938164  479026 kic.go:203] duration metric: took 4.858269325s to extract preloaded images to volume ...
	W1027 20:04:06.938304  479026 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1027 20:04:06.938424  479026 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 20:04:07.026603  479026 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-750423 --name custom-flannel-750423 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-750423 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-750423 --network custom-flannel-750423 --ip 192.168.76.2 --volume custom-flannel-750423:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 20:04:07.428705  479026 cli_runner.go:164] Run: docker container inspect custom-flannel-750423 --format={{.State.Running}}
	I1027 20:04:07.453832  479026 cli_runner.go:164] Run: docker container inspect custom-flannel-750423 --format={{.State.Status}}
	I1027 20:04:07.482998  479026 cli_runner.go:164] Run: docker exec custom-flannel-750423 stat /var/lib/dpkg/alternatives/iptables
	I1027 20:04:07.546856  479026 oci.go:144] the created container "custom-flannel-750423" has a running status.
	I1027 20:04:07.546899  479026 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21801-266035/.minikube/machines/custom-flannel-750423/id_rsa...
	I1027 20:04:08.570855  479026 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21801-266035/.minikube/machines/custom-flannel-750423/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 20:04:08.599325  479026 cli_runner.go:164] Run: docker container inspect custom-flannel-750423 --format={{.State.Status}}
	I1027 20:04:08.622091  479026 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 20:04:08.622115  479026 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-750423 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 20:04:08.683808  479026 cli_runner.go:164] Run: docker container inspect custom-flannel-750423 --format={{.State.Status}}
	I1027 20:04:08.704416  479026 machine.go:93] provisionDockerMachine start ...
	I1027 20:04:08.704522  479026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-750423
	I1027 20:04:08.724931  479026 main.go:141] libmachine: Using SSH client type: native
	I1027 20:04:08.725281  479026 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1027 20:04:08.725298  479026 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 20:04:08.725996  479026 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37348->127.0.0.1:33458: read: connection reset by peer
	W1027 20:04:10.688836  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
	W1027 20:04:12.692747  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
	I1027 20:04:11.891323  479026 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-750423
	
	I1027 20:04:11.891399  479026 ubuntu.go:182] provisioning hostname "custom-flannel-750423"
	I1027 20:04:11.891498  479026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-750423
	I1027 20:04:11.914514  479026 main.go:141] libmachine: Using SSH client type: native
	I1027 20:04:11.914831  479026 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1027 20:04:11.914847  479026 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-750423 && echo "custom-flannel-750423" | sudo tee /etc/hostname
	I1027 20:04:12.099283  479026 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-750423
	
	I1027 20:04:12.099367  479026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-750423
	I1027 20:04:12.125267  479026 main.go:141] libmachine: Using SSH client type: native
	I1027 20:04:12.125579  479026 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1027 20:04:12.125613  479026 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-750423' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-750423/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-750423' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 20:04:12.299770  479026 main.go:141] libmachine: SSH cmd err, output: <nil>: 
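Everything in provisionDockerMachine runs over SSH to the container's published port on 127.0.0.1 (33458 here): read the hostname, set it, then idempotently patch /etc/hosts. A minimal sketch of executing one such command with golang.org/x/crypto/ssh, assuming key auth as user docker with the generated id_rsa and ignoring host keys, which is tolerable only for a throwaway test node:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runRemote executes cmd on addr over SSH and returns combined output.
    func runRemote(addr, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test node only
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	out, err := runRemote("127.0.0.1:33458",
    		os.ExpandEnv("$HOME/.minikube/machines/custom-flannel-750423/id_rsa"),
    		`hostname`)
    	fmt.Println(out, err)
    }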
	I1027 20:04:12.299884  479026 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-266035/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-266035/.minikube}
	I1027 20:04:12.299919  479026 ubuntu.go:190] setting up certificates
	I1027 20:04:12.299966  479026 provision.go:84] configureAuth start
	I1027 20:04:12.300068  479026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-750423
	I1027 20:04:12.330811  479026 provision.go:143] copyHostCerts
	I1027 20:04:12.330889  479026 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem, removing ...
	I1027 20:04:12.330900  479026 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem
	I1027 20:04:12.330975  479026 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem (1078 bytes)
	I1027 20:04:12.331120  479026 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem, removing ...
	I1027 20:04:12.331128  479026 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem
	I1027 20:04:12.331159  479026 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem (1123 bytes)
	I1027 20:04:12.331226  479026 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem, removing ...
	I1027 20:04:12.331231  479026 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem
	I1027 20:04:12.331254  479026 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem (1675 bytes)
	I1027 20:04:12.331313  479026 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-750423 san=[127.0.0.1 192.168.76.2 custom-flannel-750423 localhost minikube]
	I1027 20:04:12.669150  479026 provision.go:177] copyRemoteCerts
	I1027 20:04:12.669264  479026 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 20:04:12.669348  479026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-750423
	I1027 20:04:12.694304  479026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/custom-flannel-750423/id_rsa Username:docker}
	I1027 20:04:12.805644  479026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1027 20:04:12.831559  479026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 20:04:12.849849  479026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 20:04:12.875785  479026 provision.go:87] duration metric: took 575.786923ms to configureAuth
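configureAuth issues a server certificate whose SANs cover every name the machine may be dialed by (127.0.0.1, 192.168.76.2, custom-flannel-750423, localhost, minikube). A minimal crypto/x509 sketch of signing such a SAN'd leaf, with a hypothetical IssueServerCert and a caCert/caKey pair assumed to be parsed from ca.pem and ca-key.pem beforehand:

    package certs

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"time"
    )

    // IssueServerCert signs a leaf certificate for the given DNS names and IPs,
    // returning the PEM-encoded cert and its private key.
    func IssueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey,
    	dnsNames []string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {

    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.custom-flannel-750423"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     dnsNames, // e.g. custom-flannel-750423, localhost, minikube
    		IPAddresses:  ips,      // e.g. 127.0.0.1, 192.168.76.2
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
    }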
	I1027 20:04:12.875812  479026 ubuntu.go:206] setting minikube options for container-runtime
	I1027 20:04:12.875995  479026 config.go:182] Loaded profile config "custom-flannel-750423": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:04:12.876104  479026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-750423
	I1027 20:04:12.894371  479026 main.go:141] libmachine: Using SSH client type: native
	I1027 20:04:12.894689  479026 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1027 20:04:12.894709  479026 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 20:04:13.205523  479026 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 20:04:13.205548  479026 machine.go:96] duration metric: took 4.501107167s to provisionDockerMachine
	I1027 20:04:13.205560  479026 client.go:171] duration metric: took 12.068004575s to LocalClient.Create
	I1027 20:04:13.205587  479026 start.go:167] duration metric: took 12.06810452s to libmachine.API.Create "custom-flannel-750423"
	I1027 20:04:13.205598  479026 start.go:293] postStartSetup for "custom-flannel-750423" (driver="docker")
	I1027 20:04:13.205608  479026 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 20:04:13.205681  479026 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 20:04:13.205733  479026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-750423
	I1027 20:04:13.241799  479026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/custom-flannel-750423/id_rsa Username:docker}
	I1027 20:04:13.364907  479026 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 20:04:13.368686  479026 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 20:04:13.368717  479026 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 20:04:13.368729  479026 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/addons for local assets ...
	I1027 20:04:13.368781  479026 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/files for local assets ...
	I1027 20:04:13.368868  479026 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem -> 2678802.pem in /etc/ssl/certs
	I1027 20:04:13.368977  479026 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 20:04:13.381033  479026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 20:04:13.407552  479026 start.go:296] duration metric: took 201.937551ms for postStartSetup
	I1027 20:04:13.407933  479026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-750423
	I1027 20:04:13.430560  479026 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/config.json ...
	I1027 20:04:13.430836  479026 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 20:04:13.430887  479026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-750423
	I1027 20:04:13.460485  479026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/custom-flannel-750423/id_rsa Username:docker}
	I1027 20:04:13.568393  479026 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 20:04:13.574162  479026 start.go:128] duration metric: took 12.442138288s to createHost
	I1027 20:04:13.574190  479026 start.go:83] releasing machines lock for "custom-flannel-750423", held for 12.442275253s
	I1027 20:04:13.574274  479026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-750423
	I1027 20:04:13.596934  479026 ssh_runner.go:195] Run: cat /version.json
	I1027 20:04:13.596984  479026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-750423
	I1027 20:04:13.597219  479026 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 20:04:13.597284  479026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-750423
	I1027 20:04:13.630603  479026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/custom-flannel-750423/id_rsa Username:docker}
	I1027 20:04:13.642368  479026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/custom-flannel-750423/id_rsa Username:docker}
	I1027 20:04:13.895968  479026 ssh_runner.go:195] Run: systemctl --version
	I1027 20:04:13.903219  479026 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 20:04:13.952373  479026 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 20:04:13.957239  479026 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 20:04:13.957315  479026 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 20:04:13.993426  479026 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1027 20:04:13.993460  479026 start.go:495] detecting cgroup driver to use...
	I1027 20:04:13.993492  479026 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 20:04:13.993546  479026 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 20:04:14.023791  479026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 20:04:14.048887  479026 docker.go:218] disabling cri-docker service (if available) ...
	I1027 20:04:14.048957  479026 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 20:04:14.072231  479026 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 20:04:14.098010  479026 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 20:04:14.256873  479026 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 20:04:14.432896  479026 docker.go:234] disabling docker service ...
	I1027 20:04:14.432966  479026 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 20:04:14.480119  479026 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 20:04:14.494745  479026 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 20:04:14.642656  479026 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 20:04:14.787416  479026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 20:04:14.801974  479026 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 20:04:14.818612  479026 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 20:04:14.818723  479026 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:04:14.833820  479026 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 20:04:14.833901  479026 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:04:14.845827  479026 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:04:14.857668  479026 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:04:14.866343  479026 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 20:04:14.874699  479026 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:04:14.883844  479026 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:04:14.902228  479026 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:04:14.915376  479026 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 20:04:14.928679  479026 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 20:04:14.938997  479026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:04:15.113477  479026 ssh_runner.go:195] Run: sudo systemctl restart crio
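The run of sed one-liners above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon_cgroup, the unprivileged-port sysctl) before the daemon-reload and crio restart. Each amounts to a whole-line key rewrite; a minimal Go equivalent of one of them, with a hypothetical setConfKey and the assumption that the config file fits in memory:

    package main

    import (
    	"os"
    	"regexp"
    )

    // setConfKey rewrites every `key = ...` line in a TOML-ish config to the
    // given value, mirroring `sed -i 's|^.*key = .*$|key = "value"|'`.
    func setConfKey(path, key, value string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	// e.g. the pause-image rewrite from the log above
    	_ = setConfKey("/etc/crio/crio.conf.d/02-crio.conf",
    		"pause_image", "registry.k8s.io/pause:3.10.1")
    }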
	I1027 20:04:15.708782  479026 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 20:04:15.708848  479026 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 20:04:15.714067  479026 start.go:563] Will wait 60s for crictl version
	I1027 20:04:15.714188  479026 ssh_runner.go:195] Run: which crictl
	I1027 20:04:15.718595  479026 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 20:04:15.747947  479026 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
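Both 60-second waits above, first for the crio.sock path and then for crictl to answer, are stat-or-retry loops against a deadline. A minimal sketch of the socket wait, with a hypothetical waitForSocket and the assumption that a bare os.Stat existence check is sufficient:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket blocks until path exists or timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(250 * time.Millisecond)
    	}
    	return fmt.Errorf("socket %s not present after %s", path, timeout)
    }

    func main() {
    	fmt.Println(waitForSocket("/var/run/crio/crio.sock", 60*time.Second))
    }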
	I1027 20:04:15.748110  479026 ssh_runner.go:195] Run: crio --version
	I1027 20:04:15.782345  479026 ssh_runner.go:195] Run: crio --version
	I1027 20:04:15.817705  479026 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1027 20:04:14.697515  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
	W1027 20:04:17.200265  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
	I1027 20:04:15.820891  479026 cli_runner.go:164] Run: docker network inspect custom-flannel-750423 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 20:04:15.838181  479026 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1027 20:04:15.842641  479026 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 20:04:15.853159  479026 kubeadm.go:883] updating cluster {Name:custom-flannel-750423 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-750423 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 20:04:15.853271  479026 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:04:15.853330  479026 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 20:04:15.897535  479026 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 20:04:15.897556  479026 crio.go:433] Images already preloaded, skipping extraction
	I1027 20:04:15.897611  479026 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 20:04:15.932838  479026 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 20:04:15.932910  479026 cache_images.go:85] Images are preloaded, skipping loading
	I1027 20:04:15.932934  479026 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1027 20:04:15.933068  479026 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=custom-flannel-750423 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-750423 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I1027 20:04:15.933190  479026 ssh_runner.go:195] Run: crio config
	I1027 20:04:16.022189  479026 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1027 20:04:16.022282  479026 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 20:04:16.022345  479026 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-750423 NodeName:custom-flannel-750423 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 20:04:16.023772  479026 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-750423"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
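The config just printed is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is written to /var/tmp/minikube/kubeadm.yaml.new below. A minimal sketch of sanity-checking that such a stream at least parses, using gopkg.in/yaml.v3 with a hypothetical countDocs rather than kubeadm's scheme-aware decoder:

    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    // countDocs decodes every document in a multi-doc YAML stream, failing on
    // the first syntax error. It does not validate kubeadm semantics.
    func countDocs(path string) (int, error) {
    	f, err := os.Open(path)
    	if err != nil {
    		return 0, err
    	}
    	defer f.Close()
    	dec := yaml.NewDecoder(f)
    	n := 0
    	for {
    		var doc map[string]any
    		if err := dec.Decode(&doc); err != nil {
    			if errors.Is(err, io.EOF) {
    				return n, nil
    			}
    			return n, err
    		}
    		n++
    	}
    }

    func main() {
    	n, err := countDocs("/var/tmp/minikube/kubeadm.yaml.new")
    	fmt.Println(n, err) // expect 4 documents for the config above
    }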
	
	I1027 20:04:16.023928  479026 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 20:04:16.039956  479026 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 20:04:16.040119  479026 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 20:04:16.056369  479026 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I1027 20:04:16.073205  479026 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 20:04:16.088642  479026 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1027 20:04:16.103722  479026 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1027 20:04:16.110050  479026 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 20:04:16.119843  479026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:04:16.284666  479026 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 20:04:16.312554  479026 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423 for IP: 192.168.76.2
	I1027 20:04:16.312626  479026 certs.go:195] generating shared ca certs ...
	I1027 20:04:16.312658  479026 certs.go:227] acquiring lock for ca certs: {Name:mk172548fc54811b51040cc0201d01eb4b3dd19b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:04:16.312844  479026 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key
	I1027 20:04:16.312952  479026 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key
	I1027 20:04:16.312981  479026 certs.go:257] generating profile certs ...
	I1027 20:04:16.313082  479026 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/client.key
	I1027 20:04:16.313125  479026 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/client.crt with IP's: []
	I1027 20:04:16.644509  479026 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/client.crt ...
	I1027 20:04:16.644544  479026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/client.crt: {Name:mkb007552fda2a65d09cfbc07999f44d0ad5077f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:04:16.644730  479026 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/client.key ...
	I1027 20:04:16.644747  479026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/client.key: {Name:mk63a68bf534952e069dfe2c5a68b0e310658e3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:04:16.644844  479026 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/apiserver.key.ba20c70b
	I1027 20:04:16.644865  479026 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/apiserver.crt.ba20c70b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1027 20:04:17.686240  479026 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/apiserver.crt.ba20c70b ...
	I1027 20:04:17.686270  479026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/apiserver.crt.ba20c70b: {Name:mkc6d49782361a56a1b9e35dd88f3f3970d27216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:04:17.686486  479026 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/apiserver.key.ba20c70b ...
	I1027 20:04:17.686500  479026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/apiserver.key.ba20c70b: {Name:mkc49943e9ca4ff6f91d7e0e72a8b7d9fb0f74fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:04:17.686608  479026 certs.go:382] copying /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/apiserver.crt.ba20c70b -> /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/apiserver.crt
	I1027 20:04:17.686695  479026 certs.go:386] copying /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/apiserver.key.ba20c70b -> /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/apiserver.key
	I1027 20:04:17.686754  479026 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/proxy-client.key
	I1027 20:04:17.686769  479026 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/proxy-client.crt with IP's: []
	I1027 20:04:18.581947  479026 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/proxy-client.crt ...
	I1027 20:04:18.581979  479026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/proxy-client.crt: {Name:mk3d4676be849e320b54abf1ba61340565c5056c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:04:18.582189  479026 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/proxy-client.key ...
	I1027 20:04:18.582204  479026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/proxy-client.key: {Name:mk1326034f195b9e342459aa63b1b2b929d6a345 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
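
Note the SAN list used for the apiserver cert above: 10.96.0.1 is the first usable address of the serviceSubnet 10.96.0.0/12 from the kubeadm config, alongside the loopback addresses and the node IP 192.168.76.2. A sketch of deriving that first service IP from the CIDR (simplified to a last-octet increment, which holds for subnets wider than /30):

package main

import (
	"fmt"
	"net"
)

// firstServiceIP returns the network address + 1 for an IPv4 CIDR.
func firstServiceIP(cidr string) (net.IP, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	ip := ipnet.IP.To4()
	if ip == nil {
		return nil, fmt.Errorf("IPv4 CIDR expected: %s", cidr)
	}
	first := make(net.IP, len(ip))
	copy(first, ip)
	first[3]++ // network address + 1
	return first, nil
}

func main() {
	ip, err := firstServiceIP("10.96.0.0/12")
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 10.96.0.1, the in-cluster apiserver Service IP
}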
	I1027 20:04:18.582398  479026 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880.pem (1338 bytes)
	W1027 20:04:18.582439  479026 certs.go:480] ignoring /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880_empty.pem, impossibly tiny 0 bytes
	I1027 20:04:18.582453  479026 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 20:04:18.582477  479026 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem (1078 bytes)
	I1027 20:04:18.582507  479026 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem (1123 bytes)
	I1027 20:04:18.582527  479026 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem (1675 bytes)
	I1027 20:04:18.582569  479026 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 20:04:18.583213  479026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 20:04:18.602922  479026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 20:04:18.625071  479026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 20:04:18.652190  479026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 20:04:18.671796  479026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1027 20:04:18.697537  479026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 20:04:18.715681  479026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 20:04:18.737252  479026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 20:04:18.762574  479026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /usr/share/ca-certificates/2678802.pem (1708 bytes)
	I1027 20:04:18.787355  479026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 20:04:18.824268  479026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880.pem --> /usr/share/ca-certificates/267880.pem (1338 bytes)
	I1027 20:04:18.856935  479026 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 20:04:18.872957  479026 ssh_runner.go:195] Run: openssl version
	I1027 20:04:18.887376  479026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 20:04:18.897641  479026 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:04:18.905709  479026 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:04:18.905776  479026 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:04:18.954483  479026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 20:04:18.963607  479026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/267880.pem && ln -fs /usr/share/ca-certificates/267880.pem /etc/ssl/certs/267880.pem"
	I1027 20:04:18.972591  479026 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/267880.pem
	I1027 20:04:18.976576  479026 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:04 /usr/share/ca-certificates/267880.pem
	I1027 20:04:18.976637  479026 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/267880.pem
	I1027 20:04:19.023904  479026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/267880.pem /etc/ssl/certs/51391683.0"
	I1027 20:04:19.034056  479026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2678802.pem && ln -fs /usr/share/ca-certificates/2678802.pem /etc/ssl/certs/2678802.pem"
	I1027 20:04:19.045433  479026 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2678802.pem
	I1027 20:04:19.049728  479026 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:04 /usr/share/ca-certificates/2678802.pem
	I1027 20:04:19.049886  479026 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2678802.pem
	I1027 20:04:19.098688  479026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2678802.pem /etc/ssl/certs/3ec20f2e.0"
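
The three openssl/ln pairs above implement OpenSSL's subject-hash lookup convention: each CA is exposed in /etc/ssl/certs under <subject-hash>.0 so certificate verification can find it by hash. A sketch of the same pattern, shelling out to openssl as the test does over SSH (paths illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash asks openssl for the cert's subject hash, then exposes
// the cert as <hash>.0 in the trust directory, mirroring the
// `openssl x509 -hash -noout` plus `ln -fs` pair in the log.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // replace a stale link, like the `test -L || ln -fs` guard
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}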
	I1027 20:04:19.106935  479026 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 20:04:19.110543  479026 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 20:04:19.110607  479026 kubeadm.go:400] StartCluster: {Name:custom-flannel-750423 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-750423 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:04:19.110683  479026 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 20:04:19.110739  479026 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 20:04:19.141885  479026 cri.go:89] found id: ""
	I1027 20:04:19.141963  479026 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 20:04:19.149841  479026 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 20:04:19.158027  479026 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1027 20:04:19.158095  479026 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 20:04:19.166030  479026 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 20:04:19.166061  479026 kubeadm.go:157] found existing configuration files:
	
	I1027 20:04:19.166114  479026 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 20:04:19.173934  479026 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 20:04:19.174050  479026 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 20:04:19.181583  479026 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 20:04:19.190781  479026 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 20:04:19.190847  479026 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 20:04:19.198374  479026 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 20:04:19.206361  479026 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 20:04:19.206428  479026 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 20:04:19.214236  479026 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 20:04:19.222507  479026 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 20:04:19.222651  479026 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 20:04:19.233722  479026 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 20:04:19.296515  479026 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1027 20:04:19.296793  479026 kubeadm.go:318] [preflight] Running pre-flight checks
	I1027 20:04:19.336339  479026 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1027 20:04:19.336499  479026 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1027 20:04:19.336599  479026 kubeadm.go:318] OS: Linux
	I1027 20:04:19.336672  479026 kubeadm.go:318] CGROUPS_CPU: enabled
	I1027 20:04:19.336753  479026 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1027 20:04:19.336828  479026 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1027 20:04:19.336915  479026 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1027 20:04:19.336992  479026 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1027 20:04:19.337076  479026 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1027 20:04:19.337152  479026 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1027 20:04:19.337237  479026 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1027 20:04:19.337323  479026 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1027 20:04:19.406323  479026 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 20:04:19.406517  479026 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 20:04:19.406675  479026 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 20:04:19.414695  479026 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 20:04:19.419948  479026 out.go:252]   - Generating certificates and keys ...
	I1027 20:04:19.420115  479026 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1027 20:04:19.420229  479026 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	W1027 20:04:19.689626  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
	W1027 20:04:22.189874  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
	I1027 20:04:20.787921  479026 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 20:04:21.732418  479026 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1027 20:04:21.912671  479026 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1027 20:04:22.463443  479026 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1027 20:04:22.819798  479026 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1027 20:04:22.820131  479026 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-750423 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1027 20:04:23.104941  479026 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1027 20:04:23.105263  479026 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-750423 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1027 20:04:23.789703  479026 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 20:04:24.091142  479026 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 20:04:25.068370  479026 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1027 20:04:25.069418  479026 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 20:04:25.123816  479026 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 20:04:25.609692  479026 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 20:04:25.912002  479026 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 20:04:26.176111  479026 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 20:04:26.549009  479026 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 20:04:26.549578  479026 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 20:04:26.552811  479026 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1027 20:04:24.201476  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
	W1027 20:04:26.690337  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
	I1027 20:04:26.556203  479026 out.go:252]   - Booting up control plane ...
	I1027 20:04:26.556313  479026 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 20:04:26.556395  479026 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 20:04:26.556483  479026 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 20:04:26.573404  479026 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 20:04:26.573819  479026 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 20:04:26.582829  479026 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 20:04:26.583241  479026 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 20:04:26.583301  479026 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1027 20:04:26.713124  479026 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 20:04:26.713248  479026 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 20:04:28.214394  479026 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501573003s
	I1027 20:04:28.218580  479026 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 20:04:28.218679  479026 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1027 20:04:28.219001  479026 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 20:04:28.219091  479026 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1027 20:04:28.690520  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
	W1027 20:04:31.188383  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
	W1027 20:04:33.188535  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
	I1027 20:04:31.739854  479026 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.520902586s
	I1027 20:04:32.987681  479026 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.76905092s
	I1027 20:04:34.720927  479026 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.50223854s
	I1027 20:04:34.740659  479026 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 20:04:34.755673  479026 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 20:04:34.769595  479026 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 20:04:34.769807  479026 kubeadm.go:318] [mark-control-plane] Marking the node custom-flannel-750423 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 20:04:34.785508  479026 kubeadm.go:318] [bootstrap-token] Using token: n0aaew.wh2sltsd3ngbl12t
	I1027 20:04:34.788823  479026 out.go:252]   - Configuring RBAC rules ...
	I1027 20:04:34.788949  479026 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 20:04:34.793160  479026 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 20:04:34.802935  479026 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 20:04:34.807430  479026 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 20:04:34.815913  479026 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 20:04:34.820870  479026 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 20:04:35.127902  479026 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 20:04:35.600296  479026 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1027 20:04:36.128162  479026 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1027 20:04:36.129263  479026 kubeadm.go:318] 
	I1027 20:04:36.129348  479026 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1027 20:04:36.129359  479026 kubeadm.go:318] 
	I1027 20:04:36.129436  479026 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1027 20:04:36.129452  479026 kubeadm.go:318] 
	I1027 20:04:36.129478  479026 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1027 20:04:36.129539  479026 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 20:04:36.129593  479026 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 20:04:36.129601  479026 kubeadm.go:318] 
	I1027 20:04:36.129654  479026 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1027 20:04:36.129662  479026 kubeadm.go:318] 
	I1027 20:04:36.129709  479026 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 20:04:36.129717  479026 kubeadm.go:318] 
	I1027 20:04:36.129768  479026 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1027 20:04:36.129847  479026 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 20:04:36.129917  479026 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 20:04:36.129925  479026 kubeadm.go:318] 
	I1027 20:04:36.130008  479026 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 20:04:36.130093  479026 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1027 20:04:36.130100  479026 kubeadm.go:318] 
	I1027 20:04:36.130183  479026 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token n0aaew.wh2sltsd3ngbl12t \
	I1027 20:04:36.130288  479026 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9473f077605f6866eb08c8a712af7c56a0deb8dc2a907ec8cbb4b8ae396e8f1f \
	I1027 20:04:36.130312  479026 kubeadm.go:318] 	--control-plane 
	I1027 20:04:36.130320  479026 kubeadm.go:318] 
	I1027 20:04:36.130403  479026 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1027 20:04:36.130411  479026 kubeadm.go:318] 
	I1027 20:04:36.130492  479026 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token n0aaew.wh2sltsd3ngbl12t \
	I1027 20:04:36.130596  479026 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9473f077605f6866eb08c8a712af7c56a0deb8dc2a907ec8cbb4b8ae396e8f1f 
	I1027 20:04:36.135190  479026 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1027 20:04:36.135434  479026 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1027 20:04:36.135567  479026 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
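
The --discovery-token-ca-cert-hash that kubeadm prints in the join commands above is the SHA-256 of the cluster CA's Subject Public Key Info; a joining node can recompute it from ca.crt to pin the CA. A minimal sketch:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash reproduces kubeadm's discovery hash: sha256 over the CA
// certificate's Subject Public Key Info, hex-encoded with a sha256: prefix.
func caCertHash(path string) (string, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return fmt.Sprintf("sha256:%x", sum), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println(h)
}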
	I1027 20:04:36.135599  479026 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1027 20:04:36.138719  479026 out.go:179] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	W1027 20:04:35.189973  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
	W1027 20:04:37.687827  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
	I1027 20:04:36.141499  479026 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 20:04:36.141583  479026 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1027 20:04:36.146228  479026 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1027 20:04:36.146257  479026 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I1027 20:04:36.166610  479026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 20:04:36.606543  479026 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 20:04:36.606768  479026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:04:36.606908  479026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-750423 minikube.k8s.io/updated_at=2025_10_27T20_04_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f minikube.k8s.io/name=custom-flannel-750423 minikube.k8s.io/primary=true
	I1027 20:04:36.809856  479026 ops.go:34] apiserver oom_adj: -16
	I1027 20:04:36.809877  479026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:04:37.311034  479026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:04:37.810502  479026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:04:38.310309  479026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:04:38.810443  479026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:04:39.310840  479026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:04:39.810936  479026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:04:40.310724  479026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:04:40.810843  479026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:04:40.920279  479026 kubeadm.go:1113] duration metric: took 4.313566767s to wait for elevateKubeSystemPrivileges
	I1027 20:04:40.920314  479026 kubeadm.go:402] duration metric: took 21.809710962s to StartCluster
	I1027 20:04:40.920332  479026 settings.go:142] acquiring lock: {Name:mk49b9e2e9b7901c6b51e89b8b1e8ee7f9e88107 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:04:40.920403  479026 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 20:04:40.921362  479026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/kubeconfig: {Name:mk0a03a594f8d9f9199b1c7cff7c550cec8414c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:04:40.921585  479026 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 20:04:40.921685  479026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 20:04:40.921932  479026 config.go:182] Loaded profile config "custom-flannel-750423": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:04:40.921973  479026 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 20:04:40.922035  479026 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-750423"
	I1027 20:04:40.922054  479026 addons.go:238] Setting addon storage-provisioner=true in "custom-flannel-750423"
	I1027 20:04:40.922078  479026 host.go:66] Checking if "custom-flannel-750423" exists ...
	I1027 20:04:40.922556  479026 cli_runner.go:164] Run: docker container inspect custom-flannel-750423 --format={{.State.Status}}
	I1027 20:04:40.923175  479026 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-750423"
	I1027 20:04:40.923200  479026 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-750423"
	I1027 20:04:40.923466  479026 cli_runner.go:164] Run: docker container inspect custom-flannel-750423 --format={{.State.Status}}
	I1027 20:04:40.925031  479026 out.go:179] * Verifying Kubernetes components...
	I1027 20:04:40.929097  479026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:04:40.966478  479026 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 20:04:40.970178  479026 addons.go:238] Setting addon default-storageclass=true in "custom-flannel-750423"
	I1027 20:04:40.970217  479026 host.go:66] Checking if "custom-flannel-750423" exists ...
	I1027 20:04:40.970628  479026 cli_runner.go:164] Run: docker container inspect custom-flannel-750423 --format={{.State.Status}}
	I1027 20:04:40.971142  479026 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 20:04:40.971166  479026 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 20:04:40.971222  479026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-750423
	I1027 20:04:41.000996  479026 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 20:04:41.001022  479026 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 20:04:41.001098  479026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-750423
	I1027 20:04:41.002021  479026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/custom-flannel-750423/id_rsa Username:docker}
	I1027 20:04:41.041739  479026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/custom-flannel-750423/id_rsa Username:docker}
	I1027 20:04:41.288002  479026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 20:04:41.294781  479026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 20:04:41.301510  479026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 20:04:41.335133  479026 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 20:04:41.947628  479026 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
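
The sed pipeline a few lines up edits CoreDNS's Corefile inside its ConfigMap, inserting a hosts block that resolves host.minikube.internal to the gateway just before the forward plugin. The same insertion in Go, over a trimmed, illustrative Corefile:

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts stanza before the forward plugin line,
// matching what sed's /i command does in the log.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock) // prepend the block before forward
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.76.1"))
}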
	I1027 20:04:42.351323  479026 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.056458525s)
	I1027 20:04:42.351387  479026 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.049804894s)
	I1027 20:04:42.351613  479026 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.016413939s)
	I1027 20:04:42.352511  479026 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-750423" to be "Ready" ...
	I1027 20:04:42.376777  479026 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1027 20:04:39.689758  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
	W1027 20:04:42.191594  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
	I1027 20:04:44.209570  475934 pod_ready.go:94] pod "coredns-66bc5c9577-6vc9v" is "Ready"
	I1027 20:04:44.209657  475934 pod_ready.go:86] duration metric: took 40.526824135s for pod "coredns-66bc5c9577-6vc9v" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:04:44.216845  475934 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-073048" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:04:44.221259  475934 pod_ready.go:94] pod "etcd-default-k8s-diff-port-073048" is "Ready"
	I1027 20:04:44.221283  475934 pod_ready.go:86] duration metric: took 4.414219ms for pod "etcd-default-k8s-diff-port-073048" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:04:44.225262  475934 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-073048" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:04:44.233009  475934 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-073048" is "Ready"
	I1027 20:04:44.233084  475934 pod_ready.go:86] duration metric: took 7.796566ms for pod "kube-apiserver-default-k8s-diff-port-073048" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:04:44.236071  475934 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-073048" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:04:44.387378  475934 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-073048" is "Ready"
	I1027 20:04:44.387454  475934 pod_ready.go:86] duration metric: took 151.350232ms for pod "kube-controller-manager-default-k8s-diff-port-073048" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:04:44.587876  475934 pod_ready.go:83] waiting for pod "kube-proxy-dsq46" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:04:44.987159  475934 pod_ready.go:94] pod "kube-proxy-dsq46" is "Ready"
	I1027 20:04:44.987197  475934 pod_ready.go:86] duration metric: took 399.247352ms for pod "kube-proxy-dsq46" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:04:45.188987  475934 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-073048" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:04:45.587506  475934 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-073048" is "Ready"
	I1027 20:04:45.587539  475934 pod_ready.go:86] duration metric: took 398.521522ms for pod "kube-scheduler-default-k8s-diff-port-073048" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:04:45.587552  475934 pod_ready.go:40] duration metric: took 41.913192904s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 20:04:45.686793  475934 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 20:04:45.696703  475934 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-073048" cluster and "default" namespace by default
	I1027 20:04:42.380078  479026 addons.go:514] duration metric: took 1.458081161s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 20:04:42.451810  479026 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-750423" context rescaled to 1 replicas
	W1027 20:04:44.355985  479026 node_ready.go:57] node "custom-flannel-750423" has "Ready":"False" status (will retry)
	W1027 20:04:46.360558  479026 node_ready.go:57] node "custom-flannel-750423" has "Ready":"False" status (will retry)
	I1027 20:04:46.855237  479026 node_ready.go:49] node "custom-flannel-750423" is "Ready"
	I1027 20:04:46.855282  479026 node_ready.go:38] duration metric: took 4.502733857s for node "custom-flannel-750423" to be "Ready" ...
	I1027 20:04:46.855295  479026 api_server.go:52] waiting for apiserver process to appear ...
	I1027 20:04:46.855361  479026 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 20:04:46.868611  479026 api_server.go:72] duration metric: took 5.946990657s to wait for apiserver process to appear ...
	I1027 20:04:46.868632  479026 api_server.go:88] waiting for apiserver healthz status ...
	I1027 20:04:46.868651  479026 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 20:04:46.876805  479026 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1027 20:04:46.877821  479026 api_server.go:141] control plane version: v1.34.1
	I1027 20:04:46.877846  479026 api_server.go:131] duration metric: took 9.207357ms to wait for apiserver health ...
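
The healthz wait above boils down to polling https://<node-ip>:8443/healthz until it answers 200 with body "ok". A self-contained probe sketch (the real client trusts the cluster CA; InsecureSkipVerify here only keeps the example standalone):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	c := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := c.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body)) // expect 200 OK with body "ok"
}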
	I1027 20:04:46.877855  479026 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 20:04:46.880937  479026 system_pods.go:59] 7 kube-system pods found
	I1027 20:04:46.880968  479026 system_pods.go:61] "coredns-66bc5c9577-kljgr" [4fb7da7d-dbef-4a92-b6f8-ec55ae59a2c6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:04:46.880976  479026 system_pods.go:61] "etcd-custom-flannel-750423" [c419de1f-46cc-4907-be43-19a5e64391a6] Running
	I1027 20:04:46.880982  479026 system_pods.go:61] "kube-apiserver-custom-flannel-750423" [de4fdabb-a7f5-478f-9df2-69404d40c2a8] Running
	I1027 20:04:46.880994  479026 system_pods.go:61] "kube-controller-manager-custom-flannel-750423" [66075651-08ba-4a35-b9c2-5e7d904dbbc9] Running
	I1027 20:04:46.881004  479026 system_pods.go:61] "kube-proxy-twn9b" [08589bb9-6320-42da-ad26-0245d9f69104] Running
	I1027 20:04:46.881008  479026 system_pods.go:61] "kube-scheduler-custom-flannel-750423" [7d421f7e-f3ce-4bf1-a88e-614ed0ea606e] Running
	I1027 20:04:46.881017  479026 system_pods.go:61] "storage-provisioner" [23ed61f3-fc3a-43d5-b5d0-c6226de985ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 20:04:46.881023  479026 system_pods.go:74] duration metric: took 3.163317ms to wait for pod list to return data ...
	I1027 20:04:46.881035  479026 default_sa.go:34] waiting for default service account to be created ...
	I1027 20:04:46.883550  479026 default_sa.go:45] found service account: "default"
	I1027 20:04:46.883573  479026 default_sa.go:55] duration metric: took 2.532122ms for default service account to be created ...
	I1027 20:04:46.883581  479026 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 20:04:46.885836  479026 system_pods.go:86] 7 kube-system pods found
	I1027 20:04:46.885867  479026 system_pods.go:89] "coredns-66bc5c9577-kljgr" [4fb7da7d-dbef-4a92-b6f8-ec55ae59a2c6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:04:46.885873  479026 system_pods.go:89] "etcd-custom-flannel-750423" [c419de1f-46cc-4907-be43-19a5e64391a6] Running
	I1027 20:04:46.885879  479026 system_pods.go:89] "kube-apiserver-custom-flannel-750423" [de4fdabb-a7f5-478f-9df2-69404d40c2a8] Running
	I1027 20:04:46.885884  479026 system_pods.go:89] "kube-controller-manager-custom-flannel-750423" [66075651-08ba-4a35-b9c2-5e7d904dbbc9] Running
	I1027 20:04:46.885888  479026 system_pods.go:89] "kube-proxy-twn9b" [08589bb9-6320-42da-ad26-0245d9f69104] Running
	I1027 20:04:46.885923  479026 system_pods.go:89] "kube-scheduler-custom-flannel-750423" [7d421f7e-f3ce-4bf1-a88e-614ed0ea606e] Running
	I1027 20:04:46.885935  479026 system_pods.go:89] "storage-provisioner" [23ed61f3-fc3a-43d5-b5d0-c6226de985ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 20:04:46.885956  479026 retry.go:31] will retry after 296.925999ms: missing components: kube-dns
	I1027 20:04:47.190874  479026 system_pods.go:86] 7 kube-system pods found
	I1027 20:04:47.190976  479026 system_pods.go:89] "coredns-66bc5c9577-kljgr" [4fb7da7d-dbef-4a92-b6f8-ec55ae59a2c6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:04:47.191051  479026 system_pods.go:89] "etcd-custom-flannel-750423" [c419de1f-46cc-4907-be43-19a5e64391a6] Running
	I1027 20:04:47.191083  479026 system_pods.go:89] "kube-apiserver-custom-flannel-750423" [de4fdabb-a7f5-478f-9df2-69404d40c2a8] Running
	I1027 20:04:47.191127  479026 system_pods.go:89] "kube-controller-manager-custom-flannel-750423" [66075651-08ba-4a35-b9c2-5e7d904dbbc9] Running
	I1027 20:04:47.191161  479026 system_pods.go:89] "kube-proxy-twn9b" [08589bb9-6320-42da-ad26-0245d9f69104] Running
	I1027 20:04:47.191199  479026 system_pods.go:89] "kube-scheduler-custom-flannel-750423" [7d421f7e-f3ce-4bf1-a88e-614ed0ea606e] Running
	I1027 20:04:47.191232  479026 system_pods.go:89] "storage-provisioner" [23ed61f3-fc3a-43d5-b5d0-c6226de985ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 20:04:47.191281  479026 retry.go:31] will retry after 357.971788ms: missing components: kube-dns
	I1027 20:04:47.553025  479026 system_pods.go:86] 7 kube-system pods found
	I1027 20:04:47.553067  479026 system_pods.go:89] "coredns-66bc5c9577-kljgr" [4fb7da7d-dbef-4a92-b6f8-ec55ae59a2c6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:04:47.553074  479026 system_pods.go:89] "etcd-custom-flannel-750423" [c419de1f-46cc-4907-be43-19a5e64391a6] Running
	I1027 20:04:47.553082  479026 system_pods.go:89] "kube-apiserver-custom-flannel-750423" [de4fdabb-a7f5-478f-9df2-69404d40c2a8] Running
	I1027 20:04:47.553088  479026 system_pods.go:89] "kube-controller-manager-custom-flannel-750423" [66075651-08ba-4a35-b9c2-5e7d904dbbc9] Running
	I1027 20:04:47.553092  479026 system_pods.go:89] "kube-proxy-twn9b" [08589bb9-6320-42da-ad26-0245d9f69104] Running
	I1027 20:04:47.553121  479026 system_pods.go:89] "kube-scheduler-custom-flannel-750423" [7d421f7e-f3ce-4bf1-a88e-614ed0ea606e] Running
	I1027 20:04:47.553134  479026 system_pods.go:89] "storage-provisioner" [23ed61f3-fc3a-43d5-b5d0-c6226de985ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 20:04:47.553149  479026 retry.go:31] will retry after 306.277941ms: missing components: kube-dns
	I1027 20:04:47.863277  479026 system_pods.go:86] 7 kube-system pods found
	I1027 20:04:47.863312  479026 system_pods.go:89] "coredns-66bc5c9577-kljgr" [4fb7da7d-dbef-4a92-b6f8-ec55ae59a2c6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:04:47.863321  479026 system_pods.go:89] "etcd-custom-flannel-750423" [c419de1f-46cc-4907-be43-19a5e64391a6] Running
	I1027 20:04:47.863356  479026 system_pods.go:89] "kube-apiserver-custom-flannel-750423" [de4fdabb-a7f5-478f-9df2-69404d40c2a8] Running
	I1027 20:04:47.863369  479026 system_pods.go:89] "kube-controller-manager-custom-flannel-750423" [66075651-08ba-4a35-b9c2-5e7d904dbbc9] Running
	I1027 20:04:47.863374  479026 system_pods.go:89] "kube-proxy-twn9b" [08589bb9-6320-42da-ad26-0245d9f69104] Running
	I1027 20:04:47.863379  479026 system_pods.go:89] "kube-scheduler-custom-flannel-750423" [7d421f7e-f3ce-4bf1-a88e-614ed0ea606e] Running
	I1027 20:04:47.863383  479026 system_pods.go:89] "storage-provisioner" [23ed61f3-fc3a-43d5-b5d0-c6226de985ea] Running
	I1027 20:04:47.863398  479026 retry.go:31] will retry after 577.036216ms: missing components: kube-dns
	I1027 20:04:48.444927  479026 system_pods.go:86] 7 kube-system pods found
	I1027 20:04:48.445012  479026 system_pods.go:89] "coredns-66bc5c9577-kljgr" [4fb7da7d-dbef-4a92-b6f8-ec55ae59a2c6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:04:48.445035  479026 system_pods.go:89] "etcd-custom-flannel-750423" [c419de1f-46cc-4907-be43-19a5e64391a6] Running
	I1027 20:04:48.445081  479026 system_pods.go:89] "kube-apiserver-custom-flannel-750423" [de4fdabb-a7f5-478f-9df2-69404d40c2a8] Running
	I1027 20:04:48.445110  479026 system_pods.go:89] "kube-controller-manager-custom-flannel-750423" [66075651-08ba-4a35-b9c2-5e7d904dbbc9] Running
	I1027 20:04:48.445138  479026 system_pods.go:89] "kube-proxy-twn9b" [08589bb9-6320-42da-ad26-0245d9f69104] Running
	I1027 20:04:48.445165  479026 system_pods.go:89] "kube-scheduler-custom-flannel-750423" [7d421f7e-f3ce-4bf1-a88e-614ed0ea606e] Running
	I1027 20:04:48.445194  479026 system_pods.go:89] "storage-provisioner" [23ed61f3-fc3a-43d5-b5d0-c6226de985ea] Running
	I1027 20:04:48.445234  479026 retry.go:31] will retry after 589.043067ms: missing components: kube-dns
	I1027 20:04:49.037356  479026 system_pods.go:86] 7 kube-system pods found
	I1027 20:04:49.037389  479026 system_pods.go:89] "coredns-66bc5c9577-kljgr" [4fb7da7d-dbef-4a92-b6f8-ec55ae59a2c6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:04:49.037396  479026 system_pods.go:89] "etcd-custom-flannel-750423" [c419de1f-46cc-4907-be43-19a5e64391a6] Running
	I1027 20:04:49.037402  479026 system_pods.go:89] "kube-apiserver-custom-flannel-750423" [de4fdabb-a7f5-478f-9df2-69404d40c2a8] Running
	I1027 20:04:49.037407  479026 system_pods.go:89] "kube-controller-manager-custom-flannel-750423" [66075651-08ba-4a35-b9c2-5e7d904dbbc9] Running
	I1027 20:04:49.037412  479026 system_pods.go:89] "kube-proxy-twn9b" [08589bb9-6320-42da-ad26-0245d9f69104] Running
	I1027 20:04:49.037417  479026 system_pods.go:89] "kube-scheduler-custom-flannel-750423" [7d421f7e-f3ce-4bf1-a88e-614ed0ea606e] Running
	I1027 20:04:49.037421  479026 system_pods.go:89] "storage-provisioner" [23ed61f3-fc3a-43d5-b5d0-c6226de985ea] Running
	I1027 20:04:49.037434  479026 retry.go:31] will retry after 886.298287ms: missing components: kube-dns
	I1027 20:04:49.927486  479026 system_pods.go:86] 7 kube-system pods found
	I1027 20:04:49.927526  479026 system_pods.go:89] "coredns-66bc5c9577-kljgr" [4fb7da7d-dbef-4a92-b6f8-ec55ae59a2c6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:04:49.927533  479026 system_pods.go:89] "etcd-custom-flannel-750423" [c419de1f-46cc-4907-be43-19a5e64391a6] Running
	I1027 20:04:49.927541  479026 system_pods.go:89] "kube-apiserver-custom-flannel-750423" [de4fdabb-a7f5-478f-9df2-69404d40c2a8] Running
	I1027 20:04:49.927546  479026 system_pods.go:89] "kube-controller-manager-custom-flannel-750423" [66075651-08ba-4a35-b9c2-5e7d904dbbc9] Running
	I1027 20:04:49.927550  479026 system_pods.go:89] "kube-proxy-twn9b" [08589bb9-6320-42da-ad26-0245d9f69104] Running
	I1027 20:04:49.927555  479026 system_pods.go:89] "kube-scheduler-custom-flannel-750423" [7d421f7e-f3ce-4bf1-a88e-614ed0ea606e] Running
	I1027 20:04:49.927559  479026 system_pods.go:89] "storage-provisioner" [23ed61f3-fc3a-43d5-b5d0-c6226de985ea] Running
	I1027 20:04:49.927574  479026 retry.go:31] will retry after 1.036806737s: missing components: kube-dns
	I1027 20:04:50.967702  479026 system_pods.go:86] 7 kube-system pods found
	I1027 20:04:50.967735  479026 system_pods.go:89] "coredns-66bc5c9577-kljgr" [4fb7da7d-dbef-4a92-b6f8-ec55ae59a2c6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:04:50.967742  479026 system_pods.go:89] "etcd-custom-flannel-750423" [c419de1f-46cc-4907-be43-19a5e64391a6] Running
	I1027 20:04:50.967751  479026 system_pods.go:89] "kube-apiserver-custom-flannel-750423" [de4fdabb-a7f5-478f-9df2-69404d40c2a8] Running
	I1027 20:04:50.967755  479026 system_pods.go:89] "kube-controller-manager-custom-flannel-750423" [66075651-08ba-4a35-b9c2-5e7d904dbbc9] Running
	I1027 20:04:50.967759  479026 system_pods.go:89] "kube-proxy-twn9b" [08589bb9-6320-42da-ad26-0245d9f69104] Running
	I1027 20:04:50.967763  479026 system_pods.go:89] "kube-scheduler-custom-flannel-750423" [7d421f7e-f3ce-4bf1-a88e-614ed0ea606e] Running
	I1027 20:04:50.967767  479026 system_pods.go:89] "storage-provisioner" [23ed61f3-fc3a-43d5-b5d0-c6226de985ea] Running
	I1027 20:04:50.967779  479026 retry.go:31] will retry after 1.357388986s: missing components: kube-dns
	I1027 20:04:52.328706  479026 system_pods.go:86] 7 kube-system pods found
	I1027 20:04:52.328745  479026 system_pods.go:89] "coredns-66bc5c9577-kljgr" [4fb7da7d-dbef-4a92-b6f8-ec55ae59a2c6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:04:52.328752  479026 system_pods.go:89] "etcd-custom-flannel-750423" [c419de1f-46cc-4907-be43-19a5e64391a6] Running
	I1027 20:04:52.328759  479026 system_pods.go:89] "kube-apiserver-custom-flannel-750423" [de4fdabb-a7f5-478f-9df2-69404d40c2a8] Running
	I1027 20:04:52.328764  479026 system_pods.go:89] "kube-controller-manager-custom-flannel-750423" [66075651-08ba-4a35-b9c2-5e7d904dbbc9] Running
	I1027 20:04:52.328769  479026 system_pods.go:89] "kube-proxy-twn9b" [08589bb9-6320-42da-ad26-0245d9f69104] Running
	I1027 20:04:52.328774  479026 system_pods.go:89] "kube-scheduler-custom-flannel-750423" [7d421f7e-f3ce-4bf1-a88e-614ed0ea606e] Running
	I1027 20:04:52.328778  479026 system_pods.go:89] "storage-provisioner" [23ed61f3-fc3a-43d5-b5d0-c6226de985ea] Running
	I1027 20:04:52.328792  479026 retry.go:31] will retry after 1.82519785s: missing components: kube-dns
	I1027 20:04:54.158495  479026 system_pods.go:86] 7 kube-system pods found
	I1027 20:04:54.158543  479026 system_pods.go:89] "coredns-66bc5c9577-kljgr" [4fb7da7d-dbef-4a92-b6f8-ec55ae59a2c6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:04:54.158552  479026 system_pods.go:89] "etcd-custom-flannel-750423" [c419de1f-46cc-4907-be43-19a5e64391a6] Running
	I1027 20:04:54.158560  479026 system_pods.go:89] "kube-apiserver-custom-flannel-750423" [de4fdabb-a7f5-478f-9df2-69404d40c2a8] Running
	I1027 20:04:54.158571  479026 system_pods.go:89] "kube-controller-manager-custom-flannel-750423" [66075651-08ba-4a35-b9c2-5e7d904dbbc9] Running
	I1027 20:04:54.158579  479026 system_pods.go:89] "kube-proxy-twn9b" [08589bb9-6320-42da-ad26-0245d9f69104] Running
	I1027 20:04:54.158584  479026 system_pods.go:89] "kube-scheduler-custom-flannel-750423" [7d421f7e-f3ce-4bf1-a88e-614ed0ea606e] Running
	I1027 20:04:54.158588  479026 system_pods.go:89] "storage-provisioner" [23ed61f3-fc3a-43d5-b5d0-c6226de985ea] Running
	I1027 20:04:54.158603  479026 retry.go:31] will retry after 1.611710883s: missing components: kube-dns
	I1027 20:04:55.774619  479026 system_pods.go:86] 7 kube-system pods found
	I1027 20:04:55.774660  479026 system_pods.go:89] "coredns-66bc5c9577-kljgr" [4fb7da7d-dbef-4a92-b6f8-ec55ae59a2c6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:04:55.774667  479026 system_pods.go:89] "etcd-custom-flannel-750423" [c419de1f-46cc-4907-be43-19a5e64391a6] Running
	I1027 20:04:55.774675  479026 system_pods.go:89] "kube-apiserver-custom-flannel-750423" [de4fdabb-a7f5-478f-9df2-69404d40c2a8] Running
	I1027 20:04:55.774679  479026 system_pods.go:89] "kube-controller-manager-custom-flannel-750423" [66075651-08ba-4a35-b9c2-5e7d904dbbc9] Running
	I1027 20:04:55.774683  479026 system_pods.go:89] "kube-proxy-twn9b" [08589bb9-6320-42da-ad26-0245d9f69104] Running
	I1027 20:04:55.774688  479026 system_pods.go:89] "kube-scheduler-custom-flannel-750423" [7d421f7e-f3ce-4bf1-a88e-614ed0ea606e] Running
	I1027 20:04:55.774693  479026 system_pods.go:89] "storage-provisioner" [23ed61f3-fc3a-43d5-b5d0-c6226de985ea] Running
	I1027 20:04:55.774707  479026 retry.go:31] will retry after 2.083756924s: missing components: kube-dns
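
[annotation] The retry.go lines above are minikube polling kube-system until every expected component (here kube-dns) reports Running, sleeping a growing, jittered delay between attempts. A minimal sketch of that kind of poll loop; the function name, constants, and growth factor here are illustrative, not minikube's actual implementation:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// waitForComponents polls check until no components are missing, sleeping a
// growing, jittered delay between attempts, similar to the loop logged above.
func waitForComponents(check func() []string, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	delay := 500 * time.Millisecond
	for {
		missing := check()
		if len(missing) == 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out; still missing: %v", missing)
		}
		// Jitter is why the logged intervals (886ms, 1.03s, 1.35s, ...) are irregular.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: missing components: %v\n", sleep, missing)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow the base delay each round
	}
}

func main() {
	attempts := 0
	err := waitForComponents(func() []string {
		attempts++
		if attempts < 4 {
			return []string{"kube-dns"} // pretend coredns is still Pending
		}
		return nil
	}, time.Minute)
	fmt.Println("done:", err)
}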
	
	
	==> CRI-O <==
	Oct 27 20:04:37 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:37.297475386Z" level=info msg="Removing container: 3b21c5ea313fed73e770889f8361d63d255fa877d5227dbb3a02f14257e78c0d" id=1a956f45-90c4-4ad1-b6ac-9bea2385a71d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 20:04:37 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:37.303470796Z" level=info msg="Error loading conmon cgroup of container 3b21c5ea313fed73e770889f8361d63d255fa877d5227dbb3a02f14257e78c0d: cgroup deleted" id=1a956f45-90c4-4ad1-b6ac-9bea2385a71d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 20:04:37 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:37.306428695Z" level=info msg="Removed container 3b21c5ea313fed73e770889f8361d63d255fa877d5227dbb3a02f14257e78c0d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4kn22/dashboard-metrics-scraper" id=1a956f45-90c4-4ad1-b6ac-9bea2385a71d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 20:04:43 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:43.164208804Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 20:04:43 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:43.167319347Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 20:04:43 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:43.167353849Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 20:04:43 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:43.167375518Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 20:04:43 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:43.170838297Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 20:04:43 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:43.170870017Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 20:04:43 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:43.170893245Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 20:04:43 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:43.17382959Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 20:04:43 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:43.173863255Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 20:04:43 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:43.173885982Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 20:04:43 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:43.177234673Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 20:04:43 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:43.177265515Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 20:04:57 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:57.97843853Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a55cd66f-bfac-4c41-b991-491347f70fe5 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:04:57 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:57.98471998Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=dea449c8-60a2-496b-a424-a641307cf19d name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:04:57 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:57.986942982Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4kn22/dashboard-metrics-scraper" id=f06b8d50-e846-4d37-ba54-2ed912d5a048 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 20:04:57 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:57.987099187Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:04:57 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:57.994734191Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:04:57 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:57.995599028Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:04:58 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:58.017851553Z" level=info msg="Created container a1c86c4e14c18a2f8229ec50c7b75a1fc3ad0e89f2e341db55d3435d58734e1e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4kn22/dashboard-metrics-scraper" id=f06b8d50-e846-4d37-ba54-2ed912d5a048 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 20:04:58 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:58.025895652Z" level=info msg="Starting container: a1c86c4e14c18a2f8229ec50c7b75a1fc3ad0e89f2e341db55d3435d58734e1e" id=31ed0932-903d-4755-b574-495bb22f1a4b name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 20:04:58 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:58.031599387Z" level=info msg="Started container" PID=1762 containerID=a1c86c4e14c18a2f8229ec50c7b75a1fc3ad0e89f2e341db55d3435d58734e1e description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4kn22/dashboard-metrics-scraper id=31ed0932-903d-4755-b574-495bb22f1a4b name=/runtime.v1.RuntimeService/StartContainer sandboxID=ee91aac62a0ff5691bad9ffea37ad3246623795556f789f8dc1af53258b47fc3
	Oct 27 20:04:58 default-k8s-diff-port-073048 conmon[1760]: conmon a1c86c4e14c18a2f8229 <ninfo>: container 1762 exited with status 1
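
[annotation] The "CNI monitoring event CREATE/WRITE/RENAME" lines show CRI-O watching /etc/cni/net.d with inotify and reloading the default network each time kindnet rewrites 10-kindnet.conflist via a .temp file plus rename. A hedged sketch of that watch loop using github.com/fsnotify/fsnotify; this shows the mechanism, not CRI-O's own code:

package main

import (
	"log"
	"path/filepath"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			// kindnet writes 10-kindnet.conflist.temp and renames it into
			// place, so one update surfaces as CREATE, WRITE and RENAME
			// events, exactly the sequence logged above.
			if ext := filepath.Ext(ev.Name); ext == ".conflist" || ext == ".temp" {
				log.Printf("CNI monitoring event %s %q; re-reading network configs", ev.Op, ev.Name)
				// a real runtime would re-parse the directory here
			}
		case err := <-w.Errors:
			log.Printf("watch error: %v", err)
		}
	}
}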
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	a1c86c4e14c18       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           3 seconds ago        Exited              dashboard-metrics-scraper   3                   ee91aac62a0ff       dashboard-metrics-scraper-6ffb444bf9-4kn22             kubernetes-dashboard
	2115489a06bb5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago       Exited              dashboard-metrics-scraper   2                   ee91aac62a0ff       dashboard-metrics-scraper-6ffb444bf9-4kn22             kubernetes-dashboard
	ca3df37f2ff7a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           26 seconds ago       Running             storage-provisioner         2                   8001cdfcfbf04       storage-provisioner                                    kube-system
	7c44bc52e5ff9       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   48 seconds ago       Running             kubernetes-dashboard        0                   44933a58a5c4b       kubernetes-dashboard-855c9754f9-lrj9p                  kubernetes-dashboard
	483b35ee60722       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           58 seconds ago       Running             busybox                     1                   db2f1218c5fa9       busybox                                                default
	33d4c8937c642       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   8f708a3bb8063       coredns-66bc5c9577-6vc9v                               kube-system
	392cb2b4d36cd       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           58 seconds ago       Exited              storage-provisioner         1                   8001cdfcfbf04       storage-provisioner                                    kube-system
	4a9c8f23bb639       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   3e6b761796c6c       kindnet-qc8zw                                          kube-system
	ba49fcd5f05b8       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   1376f5202b3ec       kube-proxy-dsq46                                       kube-system
	ee7be1ab30d5b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   26afeb77d73fc       etcd-default-k8s-diff-port-073048                      kube-system
	420cfe1b91ebd       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   bd3b380a662fa       kube-scheduler-default-k8s-diff-port-073048            kube-system
	70203b34337df       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   b7c87806a5193       kube-controller-manager-default-k8s-diff-port-073048   kube-system
	47af99655b9f0       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   b0787327c5064       kube-apiserver-default-k8s-diff-port-073048            kube-system
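
[annotation] The container status table above is CRI data, the same listing crictl renders. A minimal sketch of fetching it over the CRI gRPC API from CRI-O's socket using k8s.io/cri-api (the socket path is an assumption based on CRI-O's default; error handling trimmed):

package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O listens on a unix socket; grpc-go understands the "unix://"
	// target scheme natively.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := client.ListContainers(context.Background(), &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		// State distinguishes the Running rows from Exited ones like the
		// crash-looping dashboard-metrics-scraper above.
		fmt.Printf("%-13.13s %-27s %s\n", c.Id, c.Metadata.Name, c.State)
	}
}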
	
	
	==> coredns [33d4c8937c642e9e870f4db040cb94a4ed803df3edc604abb12f7c56ef7a0d44] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35892 - 53406 "HINFO IN 3675442782115736736.5461005420113912273. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004851926s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
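
[annotation] The "dial tcp 10.96.0.1:443: i/o timeout" failures above are CoreDNS's kubernetes plugin trying to List/Watch Namespaces, Services, and EndpointSlices through the in-cluster apiserver VIP before the restarted control plane and CNI were forwarding traffic. A hedged client-go sketch of the same in-cluster list call, to show where that dial happens:

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config points at the service VIP (https://10.96.0.1:443
	// here); if kube-proxy/CNI aren't forwarding yet, this dial times out
	// exactly as in the reflector errors above.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	svcs, err := client.CoreV1().Services(metav1.NamespaceAll).List(
		context.Background(), metav1.ListOptions{Limit: 500}) // limit=500, as in the logged URL
	if err != nil {
		log.Fatalf("failed to list *v1.Service: %v", err)
	}
	log.Printf("synced %d services", len(svcs.Items))
}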
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-073048
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-073048
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=default-k8s-diff-port-073048
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T20_02_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 20:02:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-073048
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 20:04:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 20:04:51 +0000   Mon, 27 Oct 2025 20:02:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 20:04:51 +0000   Mon, 27 Oct 2025 20:02:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 20:04:51 +0000   Mon, 27 Oct 2025 20:02:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 20:04:51 +0000   Mon, 27 Oct 2025 20:03:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-073048
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                dd93b306-3965-477c-8572-564479b43098
	  Boot ID:                    23ea05b4-8203-4fb9-a84a-5deb4b091cbb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 coredns-66bc5c9577-6vc9v                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m27s
	  kube-system                 etcd-default-k8s-diff-port-073048                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m33s
	  kube-system                 kindnet-qc8zw                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m28s
	  kube-system                 kube-apiserver-default-k8s-diff-port-073048             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-073048    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 kube-proxy-dsq46                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-scheduler-default-k8s-diff-port-073048             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-4kn22              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-lrj9p                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m25s              kube-proxy       
	  Normal   Starting                 57s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m33s              kubelet          Node default-k8s-diff-port-073048 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m33s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m33s              kubelet          Node default-k8s-diff-port-073048 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m33s              kubelet          Node default-k8s-diff-port-073048 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m33s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m28s              node-controller  Node default-k8s-diff-port-073048 event: Registered Node default-k8s-diff-port-073048 in Controller
	  Normal   NodeReady                106s               kubelet          Node default-k8s-diff-port-073048 status is now: NodeReady
	  Normal   Starting                 70s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 70s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  69s (x8 over 70s)  kubelet          Node default-k8s-diff-port-073048 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    69s (x8 over 70s)  kubelet          Node default-k8s-diff-port-073048 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     69s (x8 over 70s)  kubelet          Node default-k8s-diff-port-073048 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                node-controller  Node default-k8s-diff-port-073048 event: Registered Node default-k8s-diff-port-073048 in Controller
	
	
	==> dmesg <==
	[Oct27 19:41] overlayfs: idmapped layers are currently not supported
	[ +28.455216] overlayfs: idmapped layers are currently not supported
	[Oct27 19:42] overlayfs: idmapped layers are currently not supported
	[ +26.490174] overlayfs: idmapped layers are currently not supported
	[Oct27 19:44] overlayfs: idmapped layers are currently not supported
	[Oct27 19:45] overlayfs: idmapped layers are currently not supported
	[Oct27 19:47] overlayfs: idmapped layers are currently not supported
	[Oct27 19:49] overlayfs: idmapped layers are currently not supported
	[ +31.410335] overlayfs: idmapped layers are currently not supported
	[Oct27 19:51] overlayfs: idmapped layers are currently not supported
	[Oct27 19:53] overlayfs: idmapped layers are currently not supported
	[Oct27 19:54] overlayfs: idmapped layers are currently not supported
	[Oct27 19:55] overlayfs: idmapped layers are currently not supported
	[Oct27 19:56] overlayfs: idmapped layers are currently not supported
	[Oct27 19:57] overlayfs: idmapped layers are currently not supported
	[Oct27 19:58] overlayfs: idmapped layers are currently not supported
	[Oct27 19:59] overlayfs: idmapped layers are currently not supported
	[Oct27 20:00] overlayfs: idmapped layers are currently not supported
	[ +41.321877] overlayfs: idmapped layers are currently not supported
	[Oct27 20:01] overlayfs: idmapped layers are currently not supported
	[Oct27 20:02] overlayfs: idmapped layers are currently not supported
	[Oct27 20:03] overlayfs: idmapped layers are currently not supported
	[ +26.735505] overlayfs: idmapped layers are currently not supported
	[ +12.481352] overlayfs: idmapped layers are currently not supported
	[Oct27 20:04] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ee7be1ab30d5b150cf1282ac50a0ab38c89e1cf1e5e6081e3e51571014d16a0b] <==
	{"level":"warn","ts":"2025-10-27T20:03:57.654295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:57.707386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:57.707875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:57.721435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:57.740762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:57.777912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:57.794528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:57.831734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:57.898749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:57.909436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:57.942185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:57.989193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:58.023318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:58.040564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:58.064360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:58.098260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:58.132719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:58.149027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:58.194863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:58.227946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:58.271334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:58.289466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:58.305956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:58.392069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:58.525361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57462","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:05:01 up  2:47,  0 user,  load average: 4.37, 3.64, 2.94
	Linux default-k8s-diff-port-073048 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4a9c8f23bb6399f3527d0263bd18f2693466d4304ee4f8059f1eb907bc160eab] <==
	I1027 20:04:02.919605       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 20:04:02.919872       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1027 20:04:02.920005       1 main.go:148] setting mtu 1500 for CNI 
	I1027 20:04:02.920018       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 20:04:02.920027       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T20:04:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 20:04:03.174894       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 20:04:03.175027       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 20:04:03.175081       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 20:04:03.175997       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 20:04:33.164793       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1027 20:04:33.176344       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1027 20:04:33.176441       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1027 20:04:33.176355       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1027 20:04:34.475663       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 20:04:34.475786       1 metrics.go:72] Registering metrics
	I1027 20:04:34.475891       1 controller.go:711] "Syncing nftables rules"
	I1027 20:04:43.163918       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 20:04:43.163976       1 main.go:301] handling current node
	I1027 20:04:53.164563       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 20:04:53.164599       1 main.go:301] handling current node
	
	
	==> kube-apiserver [47af99655b9f01b93fb734ba61f2da0603ae3f7fa9c76d3c797aeb6e931e722d] <==
	I1027 20:04:00.384741       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 20:04:00.384750       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 20:04:00.415202       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 20:04:00.415692       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 20:04:00.432680       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1027 20:04:00.432712       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1027 20:04:00.444821       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1027 20:04:00.451509       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 20:04:00.451658       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1027 20:04:00.451902       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 20:04:00.475901       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 20:04:00.481162       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 20:04:00.490891       1 cache.go:39] Caches are synced for autoregister controller
	E1027 20:04:00.518631       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1027 20:04:00.571892       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 20:04:01.674588       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 20:04:01.957106       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 20:04:02.160333       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 20:04:02.337871       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 20:04:02.413553       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 20:04:03.013276       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.187.110"}
	I1027 20:04:03.111888       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.185.126"}
	I1027 20:04:05.575426       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 20:04:05.624420       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 20:04:05.676035       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [70203b34337df7979e43ac6fb0f4905a8cdc06feeb089b41d83af7060159d8da] <==
	I1027 20:04:05.173637       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1027 20:04:05.174030       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 20:04:05.176654       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1027 20:04:05.176762       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 20:04:05.176815       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 20:04:05.176987       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 20:04:05.182465       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 20:04:05.190779       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 20:04:05.196300       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 20:04:05.204277       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1027 20:04:05.208501       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 20:04:05.211749       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 20:04:05.215196       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 20:04:05.218247       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 20:04:05.219463       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 20:04:05.219514       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 20:04:05.219536       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 20:04:05.219682       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 20:04:05.221901       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 20:04:05.226172       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1027 20:04:05.232589       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 20:04:05.232741       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 20:04:05.241003       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 20:04:05.241030       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 20:04:05.241038       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [ba49fcd5f05b897c504ab81db54a96c17c13d29d0b5bac3058cf7c87bc70aa26] <==
	I1027 20:04:03.444442       1 server_linux.go:53] "Using iptables proxy"
	I1027 20:04:03.537249       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 20:04:03.638206       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 20:04:03.638251       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1027 20:04:03.638337       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 20:04:03.891399       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 20:04:03.891512       1 server_linux.go:132] "Using iptables Proxier"
	I1027 20:04:03.895804       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 20:04:03.896186       1 server.go:527] "Version info" version="v1.34.1"
	I1027 20:04:03.896657       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 20:04:03.898119       1 config.go:200] "Starting service config controller"
	I1027 20:04:03.898183       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 20:04:03.898223       1 config.go:106] "Starting endpoint slice config controller"
	I1027 20:04:03.898257       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 20:04:03.898295       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 20:04:03.898321       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 20:04:03.899055       1 config.go:309] "Starting node config controller"
	I1027 20:04:03.899103       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 20:04:03.899133       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 20:04:03.999175       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 20:04:03.999243       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 20:04:03.999257       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [420cfe1b91ebd7a517df417e1a1f173e78e7e81a49b61d73bcec204c005ecf7c] <==
	I1027 20:03:59.015461       1 serving.go:386] Generated self-signed cert in-memory
	I1027 20:04:03.353292       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 20:04:03.353334       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 20:04:03.358024       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1027 20:04:03.358132       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1027 20:04:03.358215       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:04:03.358447       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:04:03.358519       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 20:04:03.358553       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 20:04:03.367598       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 20:04:03.367723       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 20:04:03.459292       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 20:04:03.459419       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1027 20:04:03.459537       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 20:04:06 default-k8s-diff-port-073048 kubelet[774]: I1027 20:04:06.006152     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6aa8c881-b779-41da-a7c1-29defdba0f2c-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-4kn22\" (UID: \"6aa8c881-b779-41da-a7c1-29defdba0f2c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4kn22"
	Oct 27 20:04:06 default-k8s-diff-port-073048 kubelet[774]: I1027 20:04:06.006216     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqvf4\" (UniqueName: \"kubernetes.io/projected/a3673ae3-9469-4d0e-9186-0b159e83baa7-kube-api-access-bqvf4\") pod \"kubernetes-dashboard-855c9754f9-lrj9p\" (UID: \"a3673ae3-9469-4d0e-9186-0b159e83baa7\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lrj9p"
	Oct 27 20:04:06 default-k8s-diff-port-073048 kubelet[774]: I1027 20:04:06.006243     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a3673ae3-9469-4d0e-9186-0b159e83baa7-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-lrj9p\" (UID: \"a3673ae3-9469-4d0e-9186-0b159e83baa7\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lrj9p"
	Oct 27 20:04:06 default-k8s-diff-port-073048 kubelet[774]: I1027 20:04:06.006267     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89xqd\" (UniqueName: \"kubernetes.io/projected/6aa8c881-b779-41da-a7c1-29defdba0f2c-kube-api-access-89xqd\") pod \"dashboard-metrics-scraper-6ffb444bf9-4kn22\" (UID: \"6aa8c881-b779-41da-a7c1-29defdba0f2c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4kn22"
	Oct 27 20:04:06 default-k8s-diff-port-073048 kubelet[774]: W1027 20:04:06.279086     774 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0d0a6d2c139cfccb44f717c1d4bf1de32b26fb3f98fc7320c9146a486f5ddfbb/crio-44933a58a5c4b385161f99b706b221baf68307467a5fa4c7b89172aecce63993 WatchSource:0}: Error finding container 44933a58a5c4b385161f99b706b221baf68307467a5fa4c7b89172aecce63993: Status 404 returned error can't find the container with id 44933a58a5c4b385161f99b706b221baf68307467a5fa4c7b89172aecce63993
	Oct 27 20:04:19 default-k8s-diff-port-073048 kubelet[774]: I1027 20:04:19.226558     774 scope.go:117] "RemoveContainer" containerID="c5af2f2ce8bb0096770169b94c87cc475253590437e67349aba60a996b9327a3"
	Oct 27 20:04:19 default-k8s-diff-port-073048 kubelet[774]: I1027 20:04:19.246653     774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lrj9p" podStartSLOduration=7.555404663 podStartE2EDuration="14.24663376s" podCreationTimestamp="2025-10-27 20:04:05 +0000 UTC" firstStartedPulling="2025-10-27 20:04:06.282679865 +0000 UTC m=+14.623201146" lastFinishedPulling="2025-10-27 20:04:12.97390897 +0000 UTC m=+21.314430243" observedRunningTime="2025-10-27 20:04:13.216595021 +0000 UTC m=+21.557116293" watchObservedRunningTime="2025-10-27 20:04:19.24663376 +0000 UTC m=+27.587155033"
	Oct 27 20:04:20 default-k8s-diff-port-073048 kubelet[774]: I1027 20:04:20.230453     774 scope.go:117] "RemoveContainer" containerID="c5af2f2ce8bb0096770169b94c87cc475253590437e67349aba60a996b9327a3"
	Oct 27 20:04:20 default-k8s-diff-port-073048 kubelet[774]: I1027 20:04:20.231283     774 scope.go:117] "RemoveContainer" containerID="3b21c5ea313fed73e770889f8361d63d255fa877d5227dbb3a02f14257e78c0d"
	Oct 27 20:04:20 default-k8s-diff-port-073048 kubelet[774]: E1027 20:04:20.231516     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4kn22_kubernetes-dashboard(6aa8c881-b779-41da-a7c1-29defdba0f2c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4kn22" podUID="6aa8c881-b779-41da-a7c1-29defdba0f2c"
	Oct 27 20:04:21 default-k8s-diff-port-073048 kubelet[774]: I1027 20:04:21.235229     774 scope.go:117] "RemoveContainer" containerID="3b21c5ea313fed73e770889f8361d63d255fa877d5227dbb3a02f14257e78c0d"
	Oct 27 20:04:21 default-k8s-diff-port-073048 kubelet[774]: E1027 20:04:21.235409     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4kn22_kubernetes-dashboard(6aa8c881-b779-41da-a7c1-29defdba0f2c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4kn22" podUID="6aa8c881-b779-41da-a7c1-29defdba0f2c"
	Oct 27 20:04:26 default-k8s-diff-port-073048 kubelet[774]: I1027 20:04:26.192113     774 scope.go:117] "RemoveContainer" containerID="3b21c5ea313fed73e770889f8361d63d255fa877d5227dbb3a02f14257e78c0d"
	Oct 27 20:04:26 default-k8s-diff-port-073048 kubelet[774]: E1027 20:04:26.192307     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4kn22_kubernetes-dashboard(6aa8c881-b779-41da-a7c1-29defdba0f2c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4kn22" podUID="6aa8c881-b779-41da-a7c1-29defdba0f2c"
	Oct 27 20:04:34 default-k8s-diff-port-073048 kubelet[774]: I1027 20:04:34.268086     774 scope.go:117] "RemoveContainer" containerID="392cb2b4d36cd351dcd3237b475b909427deb573bd6d67128500918d61224f5a"
	Oct 27 20:04:36 default-k8s-diff-port-073048 kubelet[774]: I1027 20:04:36.977388     774 scope.go:117] "RemoveContainer" containerID="3b21c5ea313fed73e770889f8361d63d255fa877d5227dbb3a02f14257e78c0d"
	Oct 27 20:04:37 default-k8s-diff-port-073048 kubelet[774]: I1027 20:04:37.287815     774 scope.go:117] "RemoveContainer" containerID="3b21c5ea313fed73e770889f8361d63d255fa877d5227dbb3a02f14257e78c0d"
	Oct 27 20:04:37 default-k8s-diff-port-073048 kubelet[774]: I1027 20:04:37.288115     774 scope.go:117] "RemoveContainer" containerID="2115489a06bb5e5da51f7bed723595a80cff36c03176614c6ce4478ce4e468e7"
	Oct 27 20:04:37 default-k8s-diff-port-073048 kubelet[774]: E1027 20:04:37.288270     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4kn22_kubernetes-dashboard(6aa8c881-b779-41da-a7c1-29defdba0f2c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4kn22" podUID="6aa8c881-b779-41da-a7c1-29defdba0f2c"
	Oct 27 20:04:46 default-k8s-diff-port-073048 kubelet[774]: I1027 20:04:46.193733     774 scope.go:117] "RemoveContainer" containerID="2115489a06bb5e5da51f7bed723595a80cff36c03176614c6ce4478ce4e468e7"
	Oct 27 20:04:46 default-k8s-diff-port-073048 kubelet[774]: E1027 20:04:46.193967     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4kn22_kubernetes-dashboard(6aa8c881-b779-41da-a7c1-29defdba0f2c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4kn22" podUID="6aa8c881-b779-41da-a7c1-29defdba0f2c"
	Oct 27 20:04:57 default-k8s-diff-port-073048 kubelet[774]: I1027 20:04:57.977370     774 scope.go:117] "RemoveContainer" containerID="2115489a06bb5e5da51f7bed723595a80cff36c03176614c6ce4478ce4e468e7"
	Oct 27 20:04:58 default-k8s-diff-port-073048 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 20:04:58 default-k8s-diff-port-073048 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 20:04:58 default-k8s-diff-port-073048 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
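
[annotation] The kubelet lines show dashboard-metrics-scraper in CrashLoopBackOff with the restart delay doubling ("back-off 10s", then "back-off 20s" above): kubelet starts at 10s and doubles per failed restart up to a 5-minute cap. A small illustrative sketch of that schedule:

package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initial = 10 * time.Second // kubelet's initial CrashLoopBackOff delay
		max     = 5 * time.Minute  // the cap after repeated failures
	)
	delay := initial
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("restart attempt %d: back-off %v\n", attempt, delay)
		delay *= 2
		if delay > max {
			delay = max
		}
	}
}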
	
	
	==> kubernetes-dashboard [7c44bc52e5ff96e93a4c96064dd09128b39f114debcf592f489e9ef5f042766b] <==
	2025/10/27 20:04:13 Using namespace: kubernetes-dashboard
	2025/10/27 20:04:13 Using in-cluster config to connect to apiserver
	2025/10/27 20:04:13 Using secret token for csrf signing
	2025/10/27 20:04:13 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 20:04:13 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 20:04:13 Successful initial request to the apiserver, version: v1.34.1
	2025/10/27 20:04:13 Generating JWE encryption key
	2025/10/27 20:04:13 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 20:04:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 20:04:13 Initializing JWE encryption key from synchronized object
	2025/10/27 20:04:13 Creating in-cluster Sidecar client
	2025/10/27 20:04:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 20:04:13 Serving insecurely on HTTP port: 9090
	2025/10/27 20:04:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 20:04:13 Starting overwatch
	
	
	==> storage-provisioner [392cb2b4d36cd351dcd3237b475b909427deb573bd6d67128500918d61224f5a] <==
	I1027 20:04:03.264465       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 20:04:33.267670       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ca3df37f2ff7a2e1fc49a86c023182e51049920e03c757f37b9467c42e204794] <==
	I1027 20:04:34.368716       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 20:04:34.369617       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 20:04:34.374453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:04:37.832498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:04:42.093733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:04:45.693240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:04:48.748710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:04:51.773292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:04:51.779755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 20:04:51.779905       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 20:04:51.780069       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-073048_99b8e111-e31f-44bd-9d1a-b5e449d12627!
	I1027 20:04:51.780207       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"19a71778-5006-4e48-afac-9e5dd7131511", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-073048_99b8e111-e31f-44bd-9d1a-b5e449d12627 became leader
	W1027 20:04:51.782506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:04:51.796813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 20:04:51.883343       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-073048_99b8e111-e31f-44bd-9d1a-b5e449d12627!
	W1027 20:04:53.799999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:04:53.804391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:04:55.807486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:04:55.811995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:04:57.815628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:04:57.822923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:04:59.826538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:04:59.835268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:05:01.838808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:05:01.845307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
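Note: the fatal line in the first storage-provisioner instance above ("dial tcp 10.96.0.1:443: i/o timeout") is its startup probe of the apiserver /version endpoint timing out while the node was paused; the restarted instance then initialized and re-acquired the leader lease. Below is a minimal sketch of that kind of probe, assuming only what the log shows (the default kubernetes ClusterIP 10.96.0.1:443 and the 32s timeout); it is illustrative, not minikube's storage-provisioner source:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 32 * time.Second, // mirrors the ?timeout=32s in the failing request
			// Cert verification is skipped only to keep the sketch self-contained;
			// a real in-cluster client would load the service-account CA bundle.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://10.96.0.1:443/version?timeout=32s")
		if err != nil {
			// While the apiserver is unreachable, this is where the dial times out,
			// producing a line like the F1027 entry above.
			fmt.Println("error getting server version:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(string(body))
	}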
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-073048 -n default-k8s-diff-port-073048
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-073048 -n default-k8s-diff-port-073048: exit status 2 (435.292785ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
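Note: `--format={{.APIServer}}` (like `--format={{.Host}}` further down) is a Go text/template evaluated against minikube's status struct, which is why the stdout block holds only the bare field value even when the command exits 2. A sketch of the mechanism, with a hypothetical Status type standing in for minikube's real one:

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a stand-in; minikube's actual struct has more fields.
	type Status struct {
		Host      string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", APIServer: "Running"}
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		tmpl.Execute(os.Stdout, st) // prints just "Running", as in the stdout block above
	}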
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-073048 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
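Note: the "<empty>" values above mean the three conventional proxy variables were unset on the host. A trivial sketch of such a snapshot (illustrative, not the helper's actual code):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		for _, key := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
			val := os.Getenv(key)
			if val == "" {
				val = "<empty>"
			}
			fmt.Printf("%s=%q ", key, val)
		}
		fmt.Println()
	}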
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-073048
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-073048:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0d0a6d2c139cfccb44f717c1d4bf1de32b26fb3f98fc7320c9146a486f5ddfbb",
	        "Created": "2025-10-27T20:02:05.981897269Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 476059,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T20:03:43.990427612Z",
	            "FinishedAt": "2025-10-27T20:03:42.766648605Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/0d0a6d2c139cfccb44f717c1d4bf1de32b26fb3f98fc7320c9146a486f5ddfbb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d0a6d2c139cfccb44f717c1d4bf1de32b26fb3f98fc7320c9146a486f5ddfbb/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d0a6d2c139cfccb44f717c1d4bf1de32b26fb3f98fc7320c9146a486f5ddfbb/hosts",
	        "LogPath": "/var/lib/docker/containers/0d0a6d2c139cfccb44f717c1d4bf1de32b26fb3f98fc7320c9146a486f5ddfbb/0d0a6d2c139cfccb44f717c1d4bf1de32b26fb3f98fc7320c9146a486f5ddfbb-json.log",
	        "Name": "/default-k8s-diff-port-073048",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-073048:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-073048",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0d0a6d2c139cfccb44f717c1d4bf1de32b26fb3f98fc7320c9146a486f5ddfbb",
	                "LowerDir": "/var/lib/docker/overlay2/7b35ec25c436fa4e076bd86f8611b944f0fb1a3f41e654812ca0a32f05e4bb57-init/diff:/var/lib/docker/overlay2/0567218bbe05e8b59b2aef7ad82032d6716040c1f9d9d73e7de8682101079f76/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7b35ec25c436fa4e076bd86f8611b944f0fb1a3f41e654812ca0a32f05e4bb57/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7b35ec25c436fa4e076bd86f8611b944f0fb1a3f41e654812ca0a32f05e4bb57/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7b35ec25c436fa4e076bd86f8611b944f0fb1a3f41e654812ca0a32f05e4bb57/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-073048",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-073048/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-073048",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-073048",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-073048",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "91263f5cf3ef077725086a102c438818557a85ae26f91c4751784162e0b1d10d",
	            "SandboxKey": "/var/run/docker/netns/91263f5cf3ef",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-073048": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:dd:1a:1a:95:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "693360b70a0a6dc4cb15a9fc19e2d3b83d1fde9de38ebc7c4ce28555e19407c1",
	                    "EndpointID": "de4961e59fcb32da19ce4be6e3743ffd1514f92f86f4d3e01a8747fc10ff25eb",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-073048",
	                        "0d0a6d2c139c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
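Note: the harness dumps the full `docker inspect` JSON above, but a single field can be pulled with the CLI's Go-template flag, which is often all a post-mortem check needs. A sketch that shells out the way helpers_test.go does (the profile name is taken from this run):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func containerStatus(name string) (string, error) {
		out, err := exec.Command("docker", "inspect",
			"--format", "{{.State.Status}}", name).Output()
		if err != nil {
			return "", fmt.Errorf("docker inspect %s: %w", name, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		// "running" is expected here even though the kubelet inside is paused,
		// which is why the harness also checks the minikube-level status next.
		status, err := containerStatus("default-k8s-diff-port-073048")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(status)
	}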
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-073048 -n default-k8s-diff-port-073048
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-073048 -n default-k8s-diff-port-073048: exit status 2 (508.767774ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-073048 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-073048 logs -n 25: (1.773278707s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p no-preload-300878                                                                                                                                                                                                                          │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ delete  │ -p no-preload-300878                                                                                                                                                                                                                          │ no-preload-300878            │ jenkins │ v1.37.0 │ 27 Oct 25 20:01 UTC │ 27 Oct 25 20:01 UTC │
	│ delete  │ -p disable-driver-mounts-230052                                                                                                                                                                                                               │ disable-driver-mounts-230052 │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:02 UTC │
	│ start   │ -p default-k8s-diff-port-073048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-073048 │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:03 UTC │
	│ image   │ embed-certs-629838 image list --format=json                                                                                                                                                                                                   │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:02 UTC │
	│ pause   │ -p embed-certs-629838 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │                     │
	│ delete  │ -p embed-certs-629838                                                                                                                                                                                                                         │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:02 UTC │
	│ delete  │ -p embed-certs-629838                                                                                                                                                                                                                         │ embed-certs-629838           │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:02 UTC │
	│ start   │ -p newest-cni-702588 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:02 UTC │ 27 Oct 25 20:03 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-073048 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-073048 │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-702588 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │                     │
	│ stop    │ -p newest-cni-702588 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │ 27 Oct 25 20:03 UTC │
	│ stop    │ -p default-k8s-diff-port-073048 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-073048 │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │ 27 Oct 25 20:03 UTC │
	│ addons  │ enable dashboard -p newest-cni-702588 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │ 27 Oct 25 20:03 UTC │
	│ start   │ -p newest-cni-702588 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │ 27 Oct 25 20:03 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-073048 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-073048 │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │ 27 Oct 25 20:03 UTC │
	│ start   │ -p default-k8s-diff-port-073048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-073048 │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │ 27 Oct 25 20:04 UTC │
	│ image   │ newest-cni-702588 image list --format=json                                                                                                                                                                                                    │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │ 27 Oct 25 20:03 UTC │
	│ pause   │ -p newest-cni-702588 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │                     │
	│ delete  │ -p newest-cni-702588                                                                                                                                                                                                                          │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:03 UTC │ 27 Oct 25 20:04 UTC │
	│ delete  │ -p newest-cni-702588                                                                                                                                                                                                                          │ newest-cni-702588            │ jenkins │ v1.37.0 │ 27 Oct 25 20:04 UTC │ 27 Oct 25 20:04 UTC │
	│ start   │ -p custom-flannel-750423 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio                                                                            │ custom-flannel-750423        │ jenkins │ v1.37.0 │ 27 Oct 25 20:04 UTC │ 27 Oct 25 20:05 UTC │
	│ image   │ default-k8s-diff-port-073048 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-073048 │ jenkins │ v1.37.0 │ 27 Oct 25 20:04 UTC │ 27 Oct 25 20:04 UTC │
	│ pause   │ -p default-k8s-diff-port-073048 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-073048 │ jenkins │ v1.37.0 │ 27 Oct 25 20:04 UTC │                     │
	│ ssh     │ -p custom-flannel-750423 pgrep -a kubelet                                                                                                                                                                                                     │ custom-flannel-750423        │ jenkins │ v1.37.0 │ 27 Oct 25 20:05 UTC │ 27 Oct 25 20:05 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 20:04:00
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 20:04:00.783019  479026 out.go:360] Setting OutFile to fd 1 ...
	I1027 20:04:00.783184  479026 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:04:00.783196  479026 out.go:374] Setting ErrFile to fd 2...
	I1027 20:04:00.783202  479026 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 20:04:00.783463  479026 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 20:04:00.783896  479026 out.go:368] Setting JSON to false
	I1027 20:04:00.784883  479026 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9993,"bootTime":1761585448,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1027 20:04:00.784954  479026 start.go:141] virtualization:  
	I1027 20:04:00.788356  479026 out.go:179] * [custom-flannel-750423] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 20:04:00.791347  479026 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 20:04:00.791462  479026 notify.go:220] Checking for updates...
	I1027 20:04:00.797197  479026 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 20:04:00.800178  479026 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 20:04:00.803142  479026 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube
	I1027 20:04:00.805935  479026 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 20:04:00.808800  479026 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 20:04:00.812237  479026 config.go:182] Loaded profile config "default-k8s-diff-port-073048": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:04:00.812349  479026 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 20:04:00.860646  479026 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 20:04:00.860765  479026 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 20:04:00.967018  479026 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-27 20:04:00.957932285 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 20:04:00.967126  479026 docker.go:318] overlay module found
	I1027 20:04:00.970325  479026 out.go:179] * Using the docker driver based on user configuration
	I1027 20:04:00.973105  479026 start.go:305] selected driver: docker
	I1027 20:04:00.973130  479026 start.go:925] validating driver "docker" against <nil>
	I1027 20:04:00.973146  479026 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 20:04:00.973887  479026 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 20:04:01.083634  479026 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-27 20:04:01.070167043 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 20:04:01.083786  479026 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1027 20:04:01.084015  479026 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 20:04:01.086876  479026 out.go:179] * Using Docker driver with root privileges
	I1027 20:04:01.089654  479026 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1027 20:04:01.089694  479026 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1027 20:04:01.089780  479026 start.go:349] cluster config:
	{Name:custom-flannel-750423 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-750423 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:04:01.092948  479026 out.go:179] * Starting "custom-flannel-750423" primary control-plane node in "custom-flannel-750423" cluster
	I1027 20:04:01.095758  479026 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 20:04:01.098593  479026 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 20:04:01.100659  479026 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:04:01.100724  479026 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 20:04:01.100738  479026 cache.go:58] Caching tarball of preloaded images
	I1027 20:04:01.100827  479026 preload.go:233] Found /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1027 20:04:01.100862  479026 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 20:04:01.100973  479026 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/config.json ...
	I1027 20:04:01.100998  479026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/config.json: {Name:mk725109b4ba9ee7f5cef92c60e855205159cccd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:04:01.101165  479026 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 20:04:01.131715  479026 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 20:04:01.131745  479026 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 20:04:01.131765  479026 cache.go:232] Successfully downloaded all kic artifacts
	I1027 20:04:01.131789  479026 start.go:360] acquireMachinesLock for custom-flannel-750423: {Name:mked453956a4756e2adaba8128a6230e7dd0be3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 20:04:01.131901  479026 start.go:364] duration metric: took 89.86µs to acquireMachinesLock for "custom-flannel-750423"
	I1027 20:04:01.131933  479026 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-750423 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-750423 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 20:04:01.132007  479026 start.go:125] createHost starting for "" (driver="docker")
	I1027 20:03:59.915295  475934 node_ready.go:49] node "default-k8s-diff-port-073048" is "Ready"
	I1027 20:03:59.915322  475934 node_ready.go:38] duration metric: took 6.447641314s for node "default-k8s-diff-port-073048" to be "Ready" ...
	I1027 20:03:59.915335  475934 api_server.go:52] waiting for apiserver process to appear ...
	I1027 20:03:59.915391  475934 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 20:04:02.979561  475934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.551180389s)
	I1027 20:04:02.979646  475934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.507134932s)
	I1027 20:04:03.124756  475934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.173960698s)
	I1027 20:04:03.124794  475934 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.209386828s)
	I1027 20:04:03.124819  475934 api_server.go:72] duration metric: took 10.215957574s to wait for apiserver process to appear ...
	I1027 20:04:03.124825  475934 api_server.go:88] waiting for apiserver healthz status ...
	I1027 20:04:03.124903  475934 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1027 20:04:03.128284  475934 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-073048 addons enable metrics-server
	
	I1027 20:04:03.131474  475934 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1027 20:04:03.134475  475934 addons.go:514] duration metric: took 10.225259673s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1027 20:04:03.156605  475934 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 20:04:03.156636  475934 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 20:04:01.137166  479026 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 20:04:01.137486  479026 start.go:159] libmachine.API.Create for "custom-flannel-750423" (driver="docker")
	I1027 20:04:01.137543  479026 client.go:168] LocalClient.Create starting
	I1027 20:04:01.137624  479026 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem
	I1027 20:04:01.137669  479026 main.go:141] libmachine: Decoding PEM data...
	I1027 20:04:01.137699  479026 main.go:141] libmachine: Parsing certificate...
	I1027 20:04:01.137769  479026 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem
	I1027 20:04:01.137792  479026 main.go:141] libmachine: Decoding PEM data...
	I1027 20:04:01.137804  479026 main.go:141] libmachine: Parsing certificate...
	I1027 20:04:01.138202  479026 cli_runner.go:164] Run: docker network inspect custom-flannel-750423 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 20:04:01.166848  479026 cli_runner.go:211] docker network inspect custom-flannel-750423 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 20:04:01.166946  479026 network_create.go:284] running [docker network inspect custom-flannel-750423] to gather additional debugging logs...
	I1027 20:04:01.166963  479026 cli_runner.go:164] Run: docker network inspect custom-flannel-750423
	W1027 20:04:01.196382  479026 cli_runner.go:211] docker network inspect custom-flannel-750423 returned with exit code 1
	I1027 20:04:01.196450  479026 network_create.go:287] error running [docker network inspect custom-flannel-750423]: docker network inspect custom-flannel-750423: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-750423 not found
	I1027 20:04:01.196475  479026 network_create.go:289] output of [docker network inspect custom-flannel-750423]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-750423 not found
	
	** /stderr **
	I1027 20:04:01.196576  479026 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 20:04:01.232088  479026 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-74ee89127400 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:aa:c7:29:bd:7a:a9} reservation:<nil>}
	I1027 20:04:01.232543  479026 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-9c57ca829ac8 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:0a:24:08:53:d6:07} reservation:<nil>}
	I1027 20:04:01.232892  479026 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-b7a7c45fd176 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:52:e6:cd:c8:a6} reservation:<nil>}
	I1027 20:04:01.233394  479026 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019abb40}
	I1027 20:04:01.233421  479026 network_create.go:124] attempt to create docker network custom-flannel-750423 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1027 20:04:01.233490  479026 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-750423 custom-flannel-750423
	I1027 20:04:01.320633  479026 network_create.go:108] docker network custom-flannel-750423 192.168.76.0/24 created
	I1027 20:04:01.320668  479026 kic.go:121] calculated static IP "192.168.76.2" for the "custom-flannel-750423" container
	I1027 20:04:01.320741  479026 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 20:04:01.349042  479026 cli_runner.go:164] Run: docker volume create custom-flannel-750423 --label name.minikube.sigs.k8s.io=custom-flannel-750423 --label created_by.minikube.sigs.k8s.io=true
	I1027 20:04:01.379884  479026 oci.go:103] Successfully created a docker volume custom-flannel-750423
	I1027 20:04:01.379984  479026 cli_runner.go:164] Run: docker run --rm --name custom-flannel-750423-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-750423 --entrypoint /usr/bin/test -v custom-flannel-750423:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 20:04:02.079831  479026 oci.go:107] Successfully prepared a docker volume custom-flannel-750423
	I1027 20:04:02.079871  479026 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:04:02.079891  479026 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 20:04:02.079971  479026 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v custom-flannel-750423:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1027 20:04:03.625517  475934 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1027 20:04:03.633831  475934 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1027 20:04:03.635036  475934 api_server.go:141] control plane version: v1.34.1
	I1027 20:04:03.635063  475934 api_server.go:131] duration metric: took 510.172833ms to wait for apiserver health ...
	I1027 20:04:03.635072  475934 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 20:04:03.639420  475934 system_pods.go:59] 8 kube-system pods found
	I1027 20:04:03.639499  475934 system_pods.go:61] "coredns-66bc5c9577-6vc9v" [5d420b85-b106-4d91-9ebd-483f8ccfa445] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:04:03.639525  475934 system_pods.go:61] "etcd-default-k8s-diff-port-073048" [13538216-46ea-4c98-a89b-a4d0be59d38b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 20:04:03.639572  475934 system_pods.go:61] "kindnet-qc8zw" [10158916-6994-4c41-ba7d-e5bd80a7fd56] Running
	I1027 20:04:03.639600  475934 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-073048" [7bb63b28-d55e-4f2f-bd52-0dbce0ee5858] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 20:04:03.639661  475934 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-073048" [502a218b-490e-47f8-b946-6d2df9e30913] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 20:04:03.639689  475934 system_pods.go:61] "kube-proxy-dsq46" [91a97ff3-0f9b-41c0-bbec-870515448861] Running
	I1027 20:04:03.639732  475934 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-073048" [b090cbea-1aa4-46ff-a144-7a93a8fdeece] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 20:04:03.639754  475934 system_pods.go:61] "storage-provisioner" [9fef6eaa-5369-4f2d-9dd1-f3d7074b9a77] Running
	I1027 20:04:03.639774  475934 system_pods.go:74] duration metric: took 4.696591ms to wait for pod list to return data ...
	I1027 20:04:03.639795  475934 default_sa.go:34] waiting for default service account to be created ...
	I1027 20:04:03.643039  475934 default_sa.go:45] found service account: "default"
	I1027 20:04:03.643061  475934 default_sa.go:55] duration metric: took 3.233436ms for default service account to be created ...
	I1027 20:04:03.643070  475934 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 20:04:03.647020  475934 system_pods.go:86] 8 kube-system pods found
	I1027 20:04:03.647104  475934 system_pods.go:89] "coredns-66bc5c9577-6vc9v" [5d420b85-b106-4d91-9ebd-483f8ccfa445] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:04:03.647130  475934 system_pods.go:89] "etcd-default-k8s-diff-port-073048" [13538216-46ea-4c98-a89b-a4d0be59d38b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 20:04:03.647169  475934 system_pods.go:89] "kindnet-qc8zw" [10158916-6994-4c41-ba7d-e5bd80a7fd56] Running
	I1027 20:04:03.647198  475934 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-073048" [7bb63b28-d55e-4f2f-bd52-0dbce0ee5858] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 20:04:03.647223  475934 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-073048" [502a218b-490e-47f8-b946-6d2df9e30913] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 20:04:03.647262  475934 system_pods.go:89] "kube-proxy-dsq46" [91a97ff3-0f9b-41c0-bbec-870515448861] Running
	I1027 20:04:03.647291  475934 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-073048" [b090cbea-1aa4-46ff-a144-7a93a8fdeece] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 20:04:03.647314  475934 system_pods.go:89] "storage-provisioner" [9fef6eaa-5369-4f2d-9dd1-f3d7074b9a77] Running
	I1027 20:04:03.647355  475934 system_pods.go:126] duration metric: took 4.278083ms to wait for k8s-apps to be running ...
	I1027 20:04:03.647383  475934 system_svc.go:44] waiting for kubelet service to be running ...
	I1027 20:04:03.647477  475934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 20:04:03.664603  475934 system_svc.go:56] duration metric: took 17.211852ms WaitForService to wait for kubelet
	I1027 20:04:03.664677  475934 kubeadm.go:586] duration metric: took 10.755812811s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 20:04:03.664714  475934 node_conditions.go:102] verifying NodePressure condition ...
	I1027 20:04:03.668523  475934 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 20:04:03.668603  475934 node_conditions.go:123] node cpu capacity is 2
	I1027 20:04:03.668633  475934 node_conditions.go:105] duration metric: took 3.877906ms to run NodePressure ...
	I1027 20:04:03.668676  475934 start.go:241] waiting for startup goroutines ...
	I1027 20:04:03.668701  475934 start.go:246] waiting for cluster config update ...
	I1027 20:04:03.668726  475934 start.go:255] writing updated cluster config ...
	I1027 20:04:03.669080  475934 ssh_runner.go:195] Run: rm -f paused
	I1027 20:04:03.674273  475934 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 20:04:03.682760  475934 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6vc9v" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 20:04:05.720117  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
	W1027 20:04:08.191255  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
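
The pod_ready lines above (and the system_pods listing before them) come from repeatedly checking each kube-system pod's Ready condition. A condensed client-go sketch of that check; isPodReady is our helper name, and the default kubeconfig path is an assumption, since minikube builds its own REST config:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s Ready=%v\n", p.Name, isPodReady(&p))
	}
}
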
	I1027 20:04:06.938134  479026 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v custom-flannel-750423:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.858113514s)
	I1027 20:04:06.938164  479026 kic.go:203] duration metric: took 4.858269325s to extract preloaded images to volume ...
	W1027 20:04:06.938304  479026 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1027 20:04:06.938424  479026 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 20:04:07.026603  479026 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-750423 --name custom-flannel-750423 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-750423 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-750423 --network custom-flannel-750423 --ip 192.168.76.2 --volume custom-flannel-750423:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 20:04:07.428705  479026 cli_runner.go:164] Run: docker container inspect custom-flannel-750423 --format={{.State.Running}}
	I1027 20:04:07.453832  479026 cli_runner.go:164] Run: docker container inspect custom-flannel-750423 --format={{.State.Status}}
	I1027 20:04:07.482998  479026 cli_runner.go:164] Run: docker exec custom-flannel-750423 stat /var/lib/dpkg/alternatives/iptables
	I1027 20:04:07.546856  479026 oci.go:144] the created container "custom-flannel-750423" has a running status.
	I1027 20:04:07.546899  479026 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21801-266035/.minikube/machines/custom-flannel-750423/id_rsa...
	I1027 20:04:08.570855  479026 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21801-266035/.minikube/machines/custom-flannel-750423/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 20:04:08.599325  479026 cli_runner.go:164] Run: docker container inspect custom-flannel-750423 --format={{.State.Status}}
	I1027 20:04:08.622091  479026 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 20:04:08.622115  479026 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-750423 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 20:04:08.683808  479026 cli_runner.go:164] Run: docker container inspect custom-flannel-750423 --format={{.State.Status}}
	I1027 20:04:08.704416  479026 machine.go:93] provisionDockerMachine start ...
	I1027 20:04:08.704522  479026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-750423
	I1027 20:04:08.724931  479026 main.go:141] libmachine: Using SSH client type: native
	I1027 20:04:08.725281  479026 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1027 20:04:08.725298  479026 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 20:04:08.725996  479026 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37348->127.0.0.1:33458: read: connection reset by peer
	W1027 20:04:10.688836  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
	W1027 20:04:12.692747  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
	I1027 20:04:11.891323  479026 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-750423
	
	I1027 20:04:11.891399  479026 ubuntu.go:182] provisioning hostname "custom-flannel-750423"
	I1027 20:04:11.891498  479026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-750423
	I1027 20:04:11.914514  479026 main.go:141] libmachine: Using SSH client type: native
	I1027 20:04:11.914831  479026 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1027 20:04:11.914847  479026 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-750423 && echo "custom-flannel-750423" | sudo tee /etc/hostname
	I1027 20:04:12.099283  479026 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-750423
	
	I1027 20:04:12.099367  479026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-750423
	I1027 20:04:12.125267  479026 main.go:141] libmachine: Using SSH client type: native
	I1027 20:04:12.125579  479026 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1027 20:04:12.125613  479026 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-750423' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-750423/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-750423' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 20:04:12.299770  479026 main.go:141] libmachine: SSH cmd err, output: <nil>: 
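
provisionDockerMachine runs all of the hostname and /etc/hosts commands above over SSH to the container's forwarded port. A stripped-down sketch of that transport with golang.org/x/crypto/ssh, using the key path and port 33458 from this run; host-key checking is skipped here purely for brevity:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21801-266035/.minikube/machines/custom-flannel-750423/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33458", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out) // expect: custom-flannel-750423
}
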
	I1027 20:04:12.299884  479026 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-266035/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-266035/.minikube}
	I1027 20:04:12.299919  479026 ubuntu.go:190] setting up certificates
	I1027 20:04:12.299966  479026 provision.go:84] configureAuth start
	I1027 20:04:12.300068  479026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-750423
	I1027 20:04:12.330811  479026 provision.go:143] copyHostCerts
	I1027 20:04:12.330889  479026 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem, removing ...
	I1027 20:04:12.330900  479026 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem
	I1027 20:04:12.330975  479026 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/ca.pem (1078 bytes)
	I1027 20:04:12.331120  479026 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem, removing ...
	I1027 20:04:12.331128  479026 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem
	I1027 20:04:12.331159  479026 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/cert.pem (1123 bytes)
	I1027 20:04:12.331226  479026 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem, removing ...
	I1027 20:04:12.331231  479026 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem
	I1027 20:04:12.331254  479026 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-266035/.minikube/key.pem (1675 bytes)
	I1027 20:04:12.331313  479026 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-750423 san=[127.0.0.1 192.168.76.2 custom-flannel-750423 localhost minikube]
	I1027 20:04:12.669150  479026 provision.go:177] copyRemoteCerts
	I1027 20:04:12.669264  479026 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 20:04:12.669348  479026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-750423
	I1027 20:04:12.694304  479026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/custom-flannel-750423/id_rsa Username:docker}
	I1027 20:04:12.805644  479026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1027 20:04:12.831559  479026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 20:04:12.849849  479026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 20:04:12.875785  479026 provision.go:87] duration metric: took 575.786923ms to configureAuth
	I1027 20:04:12.875812  479026 ubuntu.go:206] setting minikube options for container-runtime
	I1027 20:04:12.875995  479026 config.go:182] Loaded profile config "custom-flannel-750423": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:04:12.876104  479026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-750423
	I1027 20:04:12.894371  479026 main.go:141] libmachine: Using SSH client type: native
	I1027 20:04:12.894689  479026 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1027 20:04:12.894709  479026 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 20:04:13.205523  479026 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 20:04:13.205548  479026 machine.go:96] duration metric: took 4.501107167s to provisionDockerMachine
	I1027 20:04:13.205560  479026 client.go:171] duration metric: took 12.068004575s to LocalClient.Create
	I1027 20:04:13.205587  479026 start.go:167] duration metric: took 12.06810452s to libmachine.API.Create "custom-flannel-750423"
	I1027 20:04:13.205598  479026 start.go:293] postStartSetup for "custom-flannel-750423" (driver="docker")
	I1027 20:04:13.205608  479026 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 20:04:13.205681  479026 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 20:04:13.205733  479026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-750423
	I1027 20:04:13.241799  479026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/custom-flannel-750423/id_rsa Username:docker}
	I1027 20:04:13.364907  479026 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 20:04:13.368686  479026 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 20:04:13.368717  479026 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 20:04:13.368729  479026 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/addons for local assets ...
	I1027 20:04:13.368781  479026 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-266035/.minikube/files for local assets ...
	I1027 20:04:13.368868  479026 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem -> 2678802.pem in /etc/ssl/certs
	I1027 20:04:13.368977  479026 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 20:04:13.381033  479026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 20:04:13.407552  479026 start.go:296] duration metric: took 201.937551ms for postStartSetup
	I1027 20:04:13.407933  479026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-750423
	I1027 20:04:13.430560  479026 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/config.json ...
	I1027 20:04:13.430836  479026 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 20:04:13.430887  479026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-750423
	I1027 20:04:13.460485  479026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/custom-flannel-750423/id_rsa Username:docker}
	I1027 20:04:13.568393  479026 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 20:04:13.574162  479026 start.go:128] duration metric: took 12.442138288s to createHost
	I1027 20:04:13.574190  479026 start.go:83] releasing machines lock for "custom-flannel-750423", held for 12.442275253s
	I1027 20:04:13.574274  479026 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-750423
	I1027 20:04:13.596934  479026 ssh_runner.go:195] Run: cat /version.json
	I1027 20:04:13.596984  479026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-750423
	I1027 20:04:13.597219  479026 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 20:04:13.597284  479026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-750423
	I1027 20:04:13.630603  479026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/custom-flannel-750423/id_rsa Username:docker}
	I1027 20:04:13.642368  479026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/custom-flannel-750423/id_rsa Username:docker}
	I1027 20:04:13.895968  479026 ssh_runner.go:195] Run: systemctl --version
	I1027 20:04:13.903219  479026 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 20:04:13.952373  479026 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 20:04:13.957239  479026 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 20:04:13.957315  479026 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 20:04:13.993426  479026 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1027 20:04:13.993460  479026 start.go:495] detecting cgroup driver to use...
	I1027 20:04:13.993492  479026 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1027 20:04:13.993546  479026 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 20:04:14.023791  479026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 20:04:14.048887  479026 docker.go:218] disabling cri-docker service (if available) ...
	I1027 20:04:14.048957  479026 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 20:04:14.072231  479026 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 20:04:14.098010  479026 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 20:04:14.256873  479026 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 20:04:14.432896  479026 docker.go:234] disabling docker service ...
	I1027 20:04:14.432966  479026 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 20:04:14.480119  479026 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 20:04:14.494745  479026 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 20:04:14.642656  479026 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 20:04:14.787416  479026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 20:04:14.801974  479026 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 20:04:14.818612  479026 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 20:04:14.818723  479026 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:04:14.833820  479026 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1027 20:04:14.833901  479026 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:04:14.845827  479026 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:04:14.857668  479026 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:04:14.866343  479026 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 20:04:14.874699  479026 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:04:14.883844  479026 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:04:14.902228  479026 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 20:04:14.915376  479026 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 20:04:14.928679  479026 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 20:04:14.938997  479026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:04:15.113477  479026 ssh_runner.go:195] Run: sudo systemctl restart crio
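
Each of the sed invocations above rewrites one key in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. The same edit expressed in Go instead of sed, shown for the pause_image line (path and value taken from this run; this is a sketch, not minikube's actual code path, which shells out to sed as logged):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("patched", conf)
}
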
	I1027 20:04:15.708782  479026 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 20:04:15.708848  479026 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 20:04:15.714067  479026 start.go:563] Will wait 60s for crictl version
	I1027 20:04:15.714188  479026 ssh_runner.go:195] Run: which crictl
	I1027 20:04:15.718595  479026 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 20:04:15.747947  479026 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 20:04:15.748110  479026 ssh_runner.go:195] Run: crio --version
	I1027 20:04:15.782345  479026 ssh_runner.go:195] Run: crio --version
	I1027 20:04:15.817705  479026 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1027 20:04:14.697515  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
	W1027 20:04:17.200265  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
	I1027 20:04:15.820891  479026 cli_runner.go:164] Run: docker network inspect custom-flannel-750423 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 20:04:15.838181  479026 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1027 20:04:15.842641  479026 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 20:04:15.853159  479026 kubeadm.go:883] updating cluster {Name:custom-flannel-750423 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-750423 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 20:04:15.853271  479026 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 20:04:15.853330  479026 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 20:04:15.897535  479026 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 20:04:15.897556  479026 crio.go:433] Images already preloaded, skipping extraction
	I1027 20:04:15.897611  479026 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 20:04:15.932838  479026 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 20:04:15.932910  479026 cache_images.go:85] Images are preloaded, skipping loading
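
The preload check above decides the images are already present by parsing `sudo crictl images --output json`. A small sketch of that parse; the struct mirrors only the fields such a check needs, with field names following the CRI JSON output:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList matches the shape of `crictl images --output json`;
// fields we don't declare are ignored by encoding/json.
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Println(img.RepoTags)
	}
}
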
	I1027 20:04:15.932934  479026 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1027 20:04:15.933068  479026 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=custom-flannel-750423 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-750423 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
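
minikube renders the kubelet systemd drop-in shown above from a template plus the cluster config that follows it. A toy version with text/template; the template text and field names here are ours, reproducing only a few of the logged flags, not minikube's real template:

package main

import (
	"os"
	"text/template"
)

// A stand-in for minikube's kubelet drop-in template. The empty
// ExecStart= line is systemd's idiom for resetting the unit's command.
const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	if err := t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.34.1",
		"NodeName":          "custom-flannel-750423",
		"NodeIP":            "192.168.76.2",
	}); err != nil {
		panic(err)
	}
}
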
	I1027 20:04:15.933190  479026 ssh_runner.go:195] Run: crio config
	I1027 20:04:16.022189  479026 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1027 20:04:16.022282  479026 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 20:04:16.022345  479026 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-750423 NodeName:custom-flannel-750423 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 20:04:16.023772  479026 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-750423"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 20:04:16.023928  479026 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 20:04:16.039956  479026 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 20:04:16.040119  479026 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 20:04:16.056369  479026 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I1027 20:04:16.073205  479026 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 20:04:16.088642  479026 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
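
The 2218-byte kubeadm.yaml.new written above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration, as dumped earlier). A quick sanity-check sketch that decodes each document and prints its kind, assuming gopkg.in/yaml.v3 and the on-host path from the log line:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	// yaml.Decoder walks the stream one `---`-separated document at a time.
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}
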
	I1027 20:04:16.103722  479026 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1027 20:04:16.110050  479026 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 20:04:16.119843  479026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:04:16.284666  479026 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 20:04:16.312554  479026 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423 for IP: 192.168.76.2
	I1027 20:04:16.312626  479026 certs.go:195] generating shared ca certs ...
	I1027 20:04:16.312658  479026 certs.go:227] acquiring lock for ca certs: {Name:mk172548fc54811b51040cc0201d01eb4b3dd19b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:04:16.312844  479026 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key
	I1027 20:04:16.312952  479026 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key
	I1027 20:04:16.312981  479026 certs.go:257] generating profile certs ...
	I1027 20:04:16.313082  479026 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/client.key
	I1027 20:04:16.313125  479026 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/client.crt with IP's: []
	I1027 20:04:16.644509  479026 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/client.crt ...
	I1027 20:04:16.644544  479026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/client.crt: {Name:mkb007552fda2a65d09cfbc07999f44d0ad5077f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:04:16.644730  479026 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/client.key ...
	I1027 20:04:16.644747  479026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/client.key: {Name:mk63a68bf534952e069dfe2c5a68b0e310658e3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:04:16.644844  479026 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/apiserver.key.ba20c70b
	I1027 20:04:16.644865  479026 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/apiserver.crt.ba20c70b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1027 20:04:17.686240  479026 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/apiserver.crt.ba20c70b ...
	I1027 20:04:17.686270  479026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/apiserver.crt.ba20c70b: {Name:mkc6d49782361a56a1b9e35dd88f3f3970d27216 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:04:17.686486  479026 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/apiserver.key.ba20c70b ...
	I1027 20:04:17.686500  479026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/apiserver.key.ba20c70b: {Name:mkc49943e9ca4ff6f91d7e0e72a8b7d9fb0f74fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:04:17.686608  479026 certs.go:382] copying /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/apiserver.crt.ba20c70b -> /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/apiserver.crt
	I1027 20:04:17.686695  479026 certs.go:386] copying /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/apiserver.key.ba20c70b -> /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/apiserver.key
	I1027 20:04:17.686754  479026 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/proxy-client.key
	I1027 20:04:17.686769  479026 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/proxy-client.crt with IP's: []
	I1027 20:04:18.581947  479026 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/proxy-client.crt ...
	I1027 20:04:18.581979  479026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/proxy-client.crt: {Name:mk3d4676be849e320b54abf1ba61340565c5056c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:04:18.582189  479026 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/proxy-client.key ...
	I1027 20:04:18.582204  479026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/proxy-client.key: {Name:mk1326034f195b9e342459aa63b1b2b929d6a345 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
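
The certs.go/crypto.go lines above generate key pairs and signed certificates for the profile, with SANs covering the service VIP and node IP. A bare-bones crypto/x509 sketch of the same mechanics (self-signed for brevity, whereas minikube signs with its shared minikubeCA; the SAN IPs are copied from this run):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		// SAN IPs from this run: service VIP and node IP.
		IPAddresses: []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("192.168.76.2")},
	}
	// Self-signed: template doubles as parent; minikube passes its CA here.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}
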
	I1027 20:04:18.582398  479026 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880.pem (1338 bytes)
	W1027 20:04:18.582439  479026 certs.go:480] ignoring /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880_empty.pem, impossibly tiny 0 bytes
	I1027 20:04:18.582453  479026 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 20:04:18.582477  479026 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/ca.pem (1078 bytes)
	I1027 20:04:18.582507  479026 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/cert.pem (1123 bytes)
	I1027 20:04:18.582527  479026 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/certs/key.pem (1675 bytes)
	I1027 20:04:18.582569  479026 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem (1708 bytes)
	I1027 20:04:18.583213  479026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 20:04:18.602922  479026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 20:04:18.625071  479026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 20:04:18.652190  479026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1027 20:04:18.671796  479026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1027 20:04:18.697537  479026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 20:04:18.715681  479026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 20:04:18.737252  479026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 20:04:18.762574  479026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/ssl/certs/2678802.pem --> /usr/share/ca-certificates/2678802.pem (1708 bytes)
	I1027 20:04:18.787355  479026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 20:04:18.824268  479026 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-266035/.minikube/certs/267880.pem --> /usr/share/ca-certificates/267880.pem (1338 bytes)
	I1027 20:04:18.856935  479026 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 20:04:18.872957  479026 ssh_runner.go:195] Run: openssl version
	I1027 20:04:18.887376  479026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 20:04:18.897641  479026 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:04:18.905709  479026 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:57 /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:04:18.905776  479026 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 20:04:18.954483  479026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 20:04:18.963607  479026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/267880.pem && ln -fs /usr/share/ca-certificates/267880.pem /etc/ssl/certs/267880.pem"
	I1027 20:04:18.972591  479026 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/267880.pem
	I1027 20:04:18.976576  479026 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:04 /usr/share/ca-certificates/267880.pem
	I1027 20:04:18.976637  479026 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/267880.pem
	I1027 20:04:19.023904  479026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/267880.pem /etc/ssl/certs/51391683.0"
	I1027 20:04:19.034056  479026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2678802.pem && ln -fs /usr/share/ca-certificates/2678802.pem /etc/ssl/certs/2678802.pem"
	I1027 20:04:19.045433  479026 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2678802.pem
	I1027 20:04:19.049728  479026 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:04 /usr/share/ca-certificates/2678802.pem
	I1027 20:04:19.049886  479026 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2678802.pem
	I1027 20:04:19.098688  479026 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2678802.pem /etc/ssl/certs/3ec20f2e.0"
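
The openssl/ln sequence above installs each CA into OpenSSL's hashed-lookup directory: compute the subject hash, then symlink <hash>.0 to the PEM. The equivalent in Go, shelling out to openssl exactly as the log does (paths taken from this run):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// openssl x509 -hash -noout prints the subject-name hash, e.g. b5213941.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// OpenSSL's CA lookup expects <hash>.0 to point at the certificate.
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil && !os.IsExist(err) {
		panic(err)
	}
	fmt.Println("linked", link)
}
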
	I1027 20:04:19.106935  479026 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 20:04:19.110543  479026 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 20:04:19.110607  479026 kubeadm.go:400] StartCluster: {Name:custom-flannel-750423 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-750423 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 20:04:19.110683  479026 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 20:04:19.110739  479026 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 20:04:19.141885  479026 cri.go:89] found id: ""
	I1027 20:04:19.141963  479026 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 20:04:19.149841  479026 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 20:04:19.158027  479026 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1027 20:04:19.158095  479026 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 20:04:19.166030  479026 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 20:04:19.166061  479026 kubeadm.go:157] found existing configuration files:
	
	I1027 20:04:19.166114  479026 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 20:04:19.173934  479026 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 20:04:19.174050  479026 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 20:04:19.181583  479026 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 20:04:19.190781  479026 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 20:04:19.190847  479026 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 20:04:19.198374  479026 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 20:04:19.206361  479026 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 20:04:19.206428  479026 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 20:04:19.214236  479026 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 20:04:19.222507  479026 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 20:04:19.222651  479026 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 20:04:19.233722  479026 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 20:04:19.296515  479026 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1027 20:04:19.296793  479026 kubeadm.go:318] [preflight] Running pre-flight checks
	I1027 20:04:19.336339  479026 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1027 20:04:19.336499  479026 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1027 20:04:19.336599  479026 kubeadm.go:318] OS: Linux
	I1027 20:04:19.336672  479026 kubeadm.go:318] CGROUPS_CPU: enabled
	I1027 20:04:19.336753  479026 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1027 20:04:19.336828  479026 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1027 20:04:19.336915  479026 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1027 20:04:19.336992  479026 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1027 20:04:19.337076  479026 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1027 20:04:19.337152  479026 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1027 20:04:19.337237  479026 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1027 20:04:19.337323  479026 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1027 20:04:19.406323  479026 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 20:04:19.406517  479026 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 20:04:19.406675  479026 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 20:04:19.414695  479026 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 20:04:19.419948  479026 out.go:252]   - Generating certificates and keys ...
	I1027 20:04:19.420115  479026 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1027 20:04:19.420229  479026 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	W1027 20:04:19.689626  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
	W1027 20:04:22.189874  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
	I1027 20:04:20.787921  479026 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 20:04:21.732418  479026 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1027 20:04:21.912671  479026 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1027 20:04:22.463443  479026 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1027 20:04:22.819798  479026 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1027 20:04:22.820131  479026 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-750423 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1027 20:04:23.104941  479026 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1027 20:04:23.105263  479026 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-750423 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1027 20:04:23.789703  479026 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 20:04:24.091142  479026 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 20:04:25.068370  479026 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1027 20:04:25.069418  479026 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 20:04:25.123816  479026 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 20:04:25.609692  479026 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 20:04:25.912002  479026 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 20:04:26.176111  479026 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 20:04:26.549009  479026 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 20:04:26.549578  479026 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 20:04:26.552811  479026 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1027 20:04:24.201476  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
	W1027 20:04:26.690337  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
	I1027 20:04:26.556203  479026 out.go:252]   - Booting up control plane ...
	I1027 20:04:26.556313  479026 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 20:04:26.556395  479026 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 20:04:26.556483  479026 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 20:04:26.573404  479026 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 20:04:26.573819  479026 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 20:04:26.582829  479026 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 20:04:26.583241  479026 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 20:04:26.583301  479026 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1027 20:04:26.713124  479026 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 20:04:26.713248  479026 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 20:04:28.214394  479026 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501573003s
	I1027 20:04:28.218580  479026 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 20:04:28.218679  479026 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1027 20:04:28.219001  479026 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 20:04:28.219091  479026 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1027 20:04:28.690520  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
	W1027 20:04:31.188383  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
	W1027 20:04:33.188535  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
	I1027 20:04:31.739854  479026 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.520902586s
	I1027 20:04:32.987681  479026 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.76905092s
	I1027 20:04:34.720927  479026 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.50223854s
	I1027 20:04:34.740659  479026 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 20:04:34.755673  479026 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 20:04:34.769595  479026 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 20:04:34.769807  479026 kubeadm.go:318] [mark-control-plane] Marking the node custom-flannel-750423 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 20:04:34.785508  479026 kubeadm.go:318] [bootstrap-token] Using token: n0aaew.wh2sltsd3ngbl12t
	I1027 20:04:34.788823  479026 out.go:252]   - Configuring RBAC rules ...
	I1027 20:04:34.788949  479026 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 20:04:34.793160  479026 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 20:04:34.802935  479026 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 20:04:34.807430  479026 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 20:04:34.815913  479026 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 20:04:34.820870  479026 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 20:04:35.127902  479026 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 20:04:35.600296  479026 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1027 20:04:36.128162  479026 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1027 20:04:36.129263  479026 kubeadm.go:318] 
	I1027 20:04:36.129348  479026 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1027 20:04:36.129359  479026 kubeadm.go:318] 
	I1027 20:04:36.129436  479026 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1027 20:04:36.129452  479026 kubeadm.go:318] 
	I1027 20:04:36.129478  479026 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1027 20:04:36.129539  479026 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 20:04:36.129593  479026 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 20:04:36.129601  479026 kubeadm.go:318] 
	I1027 20:04:36.129654  479026 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1027 20:04:36.129662  479026 kubeadm.go:318] 
	I1027 20:04:36.129709  479026 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 20:04:36.129717  479026 kubeadm.go:318] 
	I1027 20:04:36.129768  479026 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1027 20:04:36.129847  479026 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 20:04:36.129917  479026 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 20:04:36.129925  479026 kubeadm.go:318] 
	I1027 20:04:36.130008  479026 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 20:04:36.130093  479026 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1027 20:04:36.130100  479026 kubeadm.go:318] 
	I1027 20:04:36.130183  479026 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token n0aaew.wh2sltsd3ngbl12t \
	I1027 20:04:36.130288  479026 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9473f077605f6866eb08c8a712af7c56a0deb8dc2a907ec8cbb4b8ae396e8f1f \
	I1027 20:04:36.130312  479026 kubeadm.go:318] 	--control-plane 
	I1027 20:04:36.130320  479026 kubeadm.go:318] 
	I1027 20:04:36.130403  479026 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1027 20:04:36.130411  479026 kubeadm.go:318] 
	I1027 20:04:36.130492  479026 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token n0aaew.wh2sltsd3ngbl12t \
	I1027 20:04:36.130596  479026 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:9473f077605f6866eb08c8a712af7c56a0deb8dc2a907ec8cbb4b8ae396e8f1f 
	I1027 20:04:36.135190  479026 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1027 20:04:36.135434  479026 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1027 20:04:36.135567  479026 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
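The join commands captured above embed a bootstrap token (n0aaew.wh2sltsd3ngbl12t, default TTL 24h) and the CA certificate hash. A minimal sketch, not part of this run's output: an equivalent join line can be regenerated on the control-plane node with kubeadm's own tooling:

	# run on the control-plane node; prints a fresh "kubeadm join ..." line
	kubeadm token create --print-join-command
	# list existing bootstrap tokens and their remaining TTL
	kubeadm token list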
	I1027 20:04:36.135599  479026 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1027 20:04:36.138719  479026 out.go:179] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	W1027 20:04:35.189973  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
	W1027 20:04:37.687827  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
	I1027 20:04:36.141499  479026 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 20:04:36.141583  479026 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1027 20:04:36.146228  479026 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1027 20:04:36.146257  479026 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I1027 20:04:36.166610  479026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
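Here minikube copies testdata/kube-flannel.yaml to /var/tmp/minikube/cni.yaml and applies it with the bundled kubectl. A hedged manual equivalent, assuming a kubeconfig already pointing at this cluster (the app=flannel label follows the upstream manifest and may differ in other versions):

	kubectl apply -f kube-flannel.yaml
	# watch the flannel DaemonSet pods come up across namespaces
	kubectl get pods -A -l app=flannel -w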
	I1027 20:04:36.606543  479026 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 20:04:36.606768  479026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:04:36.606908  479026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-750423 minikube.k8s.io/updated_at=2025_10_27T20_04_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f minikube.k8s.io/name=custom-flannel-750423 minikube.k8s.io/primary=true
	I1027 20:04:36.809856  479026 ops.go:34] apiserver oom_adj: -16
	I1027 20:04:36.809877  479026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:04:37.311034  479026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:04:37.810502  479026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:04:38.310309  479026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:04:38.810443  479026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:04:39.310840  479026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:04:39.810936  479026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:04:40.310724  479026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:04:40.810843  479026 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 20:04:40.920279  479026 kubeadm.go:1113] duration metric: took 4.313566767s to wait for elevateKubeSystemPrivileges
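The repeated "kubectl get sa default" calls above are a poll: the default ServiceAccount is created asynchronously by the controller-manager, so minikube retries roughly every 500ms until it exists. The same pattern as a one-line shell sketch, assuming kubectl is on PATH and configured for this cluster:

	# block until the default ServiceAccount exists
	until kubectl get sa default >/dev/null 2>&1; do sleep 0.5; done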
	I1027 20:04:40.920314  479026 kubeadm.go:402] duration metric: took 21.809710962s to StartCluster
	I1027 20:04:40.920332  479026 settings.go:142] acquiring lock: {Name:mk49b9e2e9b7901c6b51e89b8b1e8ee7f9e88107 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:04:40.920403  479026 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 20:04:40.921362  479026 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/kubeconfig: {Name:mk0a03a594f8d9f9199b1c7cff7c550cec8414c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 20:04:40.921585  479026 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 20:04:40.921685  479026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 20:04:40.921932  479026 config.go:182] Loaded profile config "custom-flannel-750423": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 20:04:40.921973  479026 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 20:04:40.922035  479026 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-750423"
	I1027 20:04:40.922054  479026 addons.go:238] Setting addon storage-provisioner=true in "custom-flannel-750423"
	I1027 20:04:40.922078  479026 host.go:66] Checking if "custom-flannel-750423" exists ...
	I1027 20:04:40.922556  479026 cli_runner.go:164] Run: docker container inspect custom-flannel-750423 --format={{.State.Status}}
	I1027 20:04:40.923175  479026 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-750423"
	I1027 20:04:40.923200  479026 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-750423"
	I1027 20:04:40.923466  479026 cli_runner.go:164] Run: docker container inspect custom-flannel-750423 --format={{.State.Status}}
	I1027 20:04:40.925031  479026 out.go:179] * Verifying Kubernetes components...
	I1027 20:04:40.929097  479026 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 20:04:40.966478  479026 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 20:04:40.970178  479026 addons.go:238] Setting addon default-storageclass=true in "custom-flannel-750423"
	I1027 20:04:40.970217  479026 host.go:66] Checking if "custom-flannel-750423" exists ...
	I1027 20:04:40.970628  479026 cli_runner.go:164] Run: docker container inspect custom-flannel-750423 --format={{.State.Status}}
	I1027 20:04:40.971142  479026 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 20:04:40.971166  479026 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 20:04:40.971222  479026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-750423
	I1027 20:04:41.000996  479026 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 20:04:41.001022  479026 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 20:04:41.001098  479026 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-750423
	I1027 20:04:41.002021  479026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/custom-flannel-750423/id_rsa Username:docker}
	I1027 20:04:41.041739  479026 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/custom-flannel-750423/id_rsa Username:docker}
	I1027 20:04:41.288002  479026 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 20:04:41.294781  479026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 20:04:41.301510  479026 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 20:04:41.335133  479026 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 20:04:41.947628  479026 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
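The sed pipeline above splices a hosts block (plus a log directive) into the CoreDNS Corefile ahead of the forward plugin, so in-cluster lookups of host.minikube.internal resolve to the host gateway. Reconstructed from that sed expression, the injected fragment is:

	hosts {
	   192.168.76.1 host.minikube.internal
	   fallthrough
	}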
	I1027 20:04:42.351323  479026 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.056458525s)
	I1027 20:04:42.351387  479026 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.049804894s)
	I1027 20:04:42.351613  479026 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.016413939s)
	I1027 20:04:42.352511  479026 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-750423" to be "Ready" ...
	I1027 20:04:42.376777  479026 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1027 20:04:39.689758  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
	W1027 20:04:42.191594  475934 pod_ready.go:104] pod "coredns-66bc5c9577-6vc9v" is not "Ready", error: <nil>
	I1027 20:04:44.209570  475934 pod_ready.go:94] pod "coredns-66bc5c9577-6vc9v" is "Ready"
	I1027 20:04:44.209657  475934 pod_ready.go:86] duration metric: took 40.526824135s for pod "coredns-66bc5c9577-6vc9v" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:04:44.216845  475934 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-073048" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:04:44.221259  475934 pod_ready.go:94] pod "etcd-default-k8s-diff-port-073048" is "Ready"
	I1027 20:04:44.221283  475934 pod_ready.go:86] duration metric: took 4.414219ms for pod "etcd-default-k8s-diff-port-073048" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:04:44.225262  475934 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-073048" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:04:44.233009  475934 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-073048" is "Ready"
	I1027 20:04:44.233084  475934 pod_ready.go:86] duration metric: took 7.796566ms for pod "kube-apiserver-default-k8s-diff-port-073048" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:04:44.236071  475934 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-073048" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:04:44.387378  475934 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-073048" is "Ready"
	I1027 20:04:44.387454  475934 pod_ready.go:86] duration metric: took 151.350232ms for pod "kube-controller-manager-default-k8s-diff-port-073048" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:04:44.587876  475934 pod_ready.go:83] waiting for pod "kube-proxy-dsq46" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:04:44.987159  475934 pod_ready.go:94] pod "kube-proxy-dsq46" is "Ready"
	I1027 20:04:44.987197  475934 pod_ready.go:86] duration metric: took 399.247352ms for pod "kube-proxy-dsq46" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:04:45.188987  475934 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-073048" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:04:45.587506  475934 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-073048" is "Ready"
	I1027 20:04:45.587539  475934 pod_ready.go:86] duration metric: took 398.521522ms for pod "kube-scheduler-default-k8s-diff-port-073048" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:04:45.587552  475934 pod_ready.go:40] duration metric: took 41.913192904s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 20:04:45.686793  475934 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 20:04:45.696703  475934 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-073048" cluster and "default" namespace by default
	I1027 20:04:42.380078  479026 addons.go:514] duration metric: took 1.458081161s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 20:04:42.451810  479026 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-750423" context rescaled to 1 replicas
	W1027 20:04:44.355985  479026 node_ready.go:57] node "custom-flannel-750423" has "Ready":"False" status (will retry)
	W1027 20:04:46.360558  479026 node_ready.go:57] node "custom-flannel-750423" has "Ready":"False" status (will retry)
	I1027 20:04:46.855237  479026 node_ready.go:49] node "custom-flannel-750423" is "Ready"
	I1027 20:04:46.855282  479026 node_ready.go:38] duration metric: took 4.502733857s for node "custom-flannel-750423" to be "Ready" ...
	I1027 20:04:46.855295  479026 api_server.go:52] waiting for apiserver process to appear ...
	I1027 20:04:46.855361  479026 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 20:04:46.868611  479026 api_server.go:72] duration metric: took 5.946990657s to wait for apiserver process to appear ...
	I1027 20:04:46.868632  479026 api_server.go:88] waiting for apiserver healthz status ...
	I1027 20:04:46.868651  479026 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 20:04:46.876805  479026 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1027 20:04:46.877821  479026 api_server.go:141] control plane version: v1.34.1
	I1027 20:04:46.877846  479026 api_server.go:131] duration metric: took 9.207357ms to wait for apiserver health ...
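The health wait above is a plain GET against the apiserver's /healthz endpoint, which default RBAC (the system:public-info-viewer ClusterRole) exposes even to unauthenticated clients. A hedged manual spot-check, assuming network reach to the node IP and anonymous auth left at its default:

	curl -k https://192.168.76.2:8443/healthz
	# expected body on success: ok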
	I1027 20:04:46.877855  479026 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 20:04:46.880937  479026 system_pods.go:59] 7 kube-system pods found
	I1027 20:04:46.880968  479026 system_pods.go:61] "coredns-66bc5c9577-kljgr" [4fb7da7d-dbef-4a92-b6f8-ec55ae59a2c6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:04:46.880976  479026 system_pods.go:61] "etcd-custom-flannel-750423" [c419de1f-46cc-4907-be43-19a5e64391a6] Running
	I1027 20:04:46.880982  479026 system_pods.go:61] "kube-apiserver-custom-flannel-750423" [de4fdabb-a7f5-478f-9df2-69404d40c2a8] Running
	I1027 20:04:46.880994  479026 system_pods.go:61] "kube-controller-manager-custom-flannel-750423" [66075651-08ba-4a35-b9c2-5e7d904dbbc9] Running
	I1027 20:04:46.881004  479026 system_pods.go:61] "kube-proxy-twn9b" [08589bb9-6320-42da-ad26-0245d9f69104] Running
	I1027 20:04:46.881008  479026 system_pods.go:61] "kube-scheduler-custom-flannel-750423" [7d421f7e-f3ce-4bf1-a88e-614ed0ea606e] Running
	I1027 20:04:46.881017  479026 system_pods.go:61] "storage-provisioner" [23ed61f3-fc3a-43d5-b5d0-c6226de985ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 20:04:46.881023  479026 system_pods.go:74] duration metric: took 3.163317ms to wait for pod list to return data ...
	I1027 20:04:46.881035  479026 default_sa.go:34] waiting for default service account to be created ...
	I1027 20:04:46.883550  479026 default_sa.go:45] found service account: "default"
	I1027 20:04:46.883573  479026 default_sa.go:55] duration metric: took 2.532122ms for default service account to be created ...
	I1027 20:04:46.883581  479026 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 20:04:46.885836  479026 system_pods.go:86] 7 kube-system pods found
	I1027 20:04:46.885867  479026 system_pods.go:89] "coredns-66bc5c9577-kljgr" [4fb7da7d-dbef-4a92-b6f8-ec55ae59a2c6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:04:46.885873  479026 system_pods.go:89] "etcd-custom-flannel-750423" [c419de1f-46cc-4907-be43-19a5e64391a6] Running
	I1027 20:04:46.885879  479026 system_pods.go:89] "kube-apiserver-custom-flannel-750423" [de4fdabb-a7f5-478f-9df2-69404d40c2a8] Running
	I1027 20:04:46.885884  479026 system_pods.go:89] "kube-controller-manager-custom-flannel-750423" [66075651-08ba-4a35-b9c2-5e7d904dbbc9] Running
	I1027 20:04:46.885888  479026 system_pods.go:89] "kube-proxy-twn9b" [08589bb9-6320-42da-ad26-0245d9f69104] Running
	I1027 20:04:46.885923  479026 system_pods.go:89] "kube-scheduler-custom-flannel-750423" [7d421f7e-f3ce-4bf1-a88e-614ed0ea606e] Running
	I1027 20:04:46.885935  479026 system_pods.go:89] "storage-provisioner" [23ed61f3-fc3a-43d5-b5d0-c6226de985ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 20:04:46.885956  479026 retry.go:31] will retry after 296.925999ms: missing components: kube-dns
	I1027 20:04:47.190874  479026 system_pods.go:86] 7 kube-system pods found
	I1027 20:04:47.190976  479026 system_pods.go:89] "coredns-66bc5c9577-kljgr" [4fb7da7d-dbef-4a92-b6f8-ec55ae59a2c6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:04:47.191051  479026 system_pods.go:89] "etcd-custom-flannel-750423" [c419de1f-46cc-4907-be43-19a5e64391a6] Running
	I1027 20:04:47.191083  479026 system_pods.go:89] "kube-apiserver-custom-flannel-750423" [de4fdabb-a7f5-478f-9df2-69404d40c2a8] Running
	I1027 20:04:47.191127  479026 system_pods.go:89] "kube-controller-manager-custom-flannel-750423" [66075651-08ba-4a35-b9c2-5e7d904dbbc9] Running
	I1027 20:04:47.191161  479026 system_pods.go:89] "kube-proxy-twn9b" [08589bb9-6320-42da-ad26-0245d9f69104] Running
	I1027 20:04:47.191199  479026 system_pods.go:89] "kube-scheduler-custom-flannel-750423" [7d421f7e-f3ce-4bf1-a88e-614ed0ea606e] Running
	I1027 20:04:47.191232  479026 system_pods.go:89] "storage-provisioner" [23ed61f3-fc3a-43d5-b5d0-c6226de985ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 20:04:47.191281  479026 retry.go:31] will retry after 357.971788ms: missing components: kube-dns
	I1027 20:04:47.553025  479026 system_pods.go:86] 7 kube-system pods found
	I1027 20:04:47.553067  479026 system_pods.go:89] "coredns-66bc5c9577-kljgr" [4fb7da7d-dbef-4a92-b6f8-ec55ae59a2c6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:04:47.553074  479026 system_pods.go:89] "etcd-custom-flannel-750423" [c419de1f-46cc-4907-be43-19a5e64391a6] Running
	I1027 20:04:47.553082  479026 system_pods.go:89] "kube-apiserver-custom-flannel-750423" [de4fdabb-a7f5-478f-9df2-69404d40c2a8] Running
	I1027 20:04:47.553088  479026 system_pods.go:89] "kube-controller-manager-custom-flannel-750423" [66075651-08ba-4a35-b9c2-5e7d904dbbc9] Running
	I1027 20:04:47.553092  479026 system_pods.go:89] "kube-proxy-twn9b" [08589bb9-6320-42da-ad26-0245d9f69104] Running
	I1027 20:04:47.553121  479026 system_pods.go:89] "kube-scheduler-custom-flannel-750423" [7d421f7e-f3ce-4bf1-a88e-614ed0ea606e] Running
	I1027 20:04:47.553134  479026 system_pods.go:89] "storage-provisioner" [23ed61f3-fc3a-43d5-b5d0-c6226de985ea] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 20:04:47.553149  479026 retry.go:31] will retry after 306.277941ms: missing components: kube-dns
	I1027 20:04:47.863277  479026 system_pods.go:86] 7 kube-system pods found
	I1027 20:04:47.863312  479026 system_pods.go:89] "coredns-66bc5c9577-kljgr" [4fb7da7d-dbef-4a92-b6f8-ec55ae59a2c6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:04:47.863321  479026 system_pods.go:89] "etcd-custom-flannel-750423" [c419de1f-46cc-4907-be43-19a5e64391a6] Running
	I1027 20:04:47.863356  479026 system_pods.go:89] "kube-apiserver-custom-flannel-750423" [de4fdabb-a7f5-478f-9df2-69404d40c2a8] Running
	I1027 20:04:47.863369  479026 system_pods.go:89] "kube-controller-manager-custom-flannel-750423" [66075651-08ba-4a35-b9c2-5e7d904dbbc9] Running
	I1027 20:04:47.863374  479026 system_pods.go:89] "kube-proxy-twn9b" [08589bb9-6320-42da-ad26-0245d9f69104] Running
	I1027 20:04:47.863379  479026 system_pods.go:89] "kube-scheduler-custom-flannel-750423" [7d421f7e-f3ce-4bf1-a88e-614ed0ea606e] Running
	I1027 20:04:47.863383  479026 system_pods.go:89] "storage-provisioner" [23ed61f3-fc3a-43d5-b5d0-c6226de985ea] Running
	I1027 20:04:47.863398  479026 retry.go:31] will retry after 577.036216ms: missing components: kube-dns
	I1027 20:04:48.444927  479026 system_pods.go:86] 7 kube-system pods found
	I1027 20:04:48.445012  479026 system_pods.go:89] "coredns-66bc5c9577-kljgr" [4fb7da7d-dbef-4a92-b6f8-ec55ae59a2c6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:04:48.445035  479026 system_pods.go:89] "etcd-custom-flannel-750423" [c419de1f-46cc-4907-be43-19a5e64391a6] Running
	I1027 20:04:48.445081  479026 system_pods.go:89] "kube-apiserver-custom-flannel-750423" [de4fdabb-a7f5-478f-9df2-69404d40c2a8] Running
	I1027 20:04:48.445110  479026 system_pods.go:89] "kube-controller-manager-custom-flannel-750423" [66075651-08ba-4a35-b9c2-5e7d904dbbc9] Running
	I1027 20:04:48.445138  479026 system_pods.go:89] "kube-proxy-twn9b" [08589bb9-6320-42da-ad26-0245d9f69104] Running
	I1027 20:04:48.445165  479026 system_pods.go:89] "kube-scheduler-custom-flannel-750423" [7d421f7e-f3ce-4bf1-a88e-614ed0ea606e] Running
	I1027 20:04:48.445194  479026 system_pods.go:89] "storage-provisioner" [23ed61f3-fc3a-43d5-b5d0-c6226de985ea] Running
	I1027 20:04:48.445234  479026 retry.go:31] will retry after 589.043067ms: missing components: kube-dns
	I1027 20:04:49.037356  479026 system_pods.go:86] 7 kube-system pods found
	I1027 20:04:49.037389  479026 system_pods.go:89] "coredns-66bc5c9577-kljgr" [4fb7da7d-dbef-4a92-b6f8-ec55ae59a2c6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:04:49.037396  479026 system_pods.go:89] "etcd-custom-flannel-750423" [c419de1f-46cc-4907-be43-19a5e64391a6] Running
	I1027 20:04:49.037402  479026 system_pods.go:89] "kube-apiserver-custom-flannel-750423" [de4fdabb-a7f5-478f-9df2-69404d40c2a8] Running
	I1027 20:04:49.037407  479026 system_pods.go:89] "kube-controller-manager-custom-flannel-750423" [66075651-08ba-4a35-b9c2-5e7d904dbbc9] Running
	I1027 20:04:49.037412  479026 system_pods.go:89] "kube-proxy-twn9b" [08589bb9-6320-42da-ad26-0245d9f69104] Running
	I1027 20:04:49.037417  479026 system_pods.go:89] "kube-scheduler-custom-flannel-750423" [7d421f7e-f3ce-4bf1-a88e-614ed0ea606e] Running
	I1027 20:04:49.037421  479026 system_pods.go:89] "storage-provisioner" [23ed61f3-fc3a-43d5-b5d0-c6226de985ea] Running
	I1027 20:04:49.037434  479026 retry.go:31] will retry after 886.298287ms: missing components: kube-dns
	I1027 20:04:49.927486  479026 system_pods.go:86] 7 kube-system pods found
	I1027 20:04:49.927526  479026 system_pods.go:89] "coredns-66bc5c9577-kljgr" [4fb7da7d-dbef-4a92-b6f8-ec55ae59a2c6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:04:49.927533  479026 system_pods.go:89] "etcd-custom-flannel-750423" [c419de1f-46cc-4907-be43-19a5e64391a6] Running
	I1027 20:04:49.927541  479026 system_pods.go:89] "kube-apiserver-custom-flannel-750423" [de4fdabb-a7f5-478f-9df2-69404d40c2a8] Running
	I1027 20:04:49.927546  479026 system_pods.go:89] "kube-controller-manager-custom-flannel-750423" [66075651-08ba-4a35-b9c2-5e7d904dbbc9] Running
	I1027 20:04:49.927550  479026 system_pods.go:89] "kube-proxy-twn9b" [08589bb9-6320-42da-ad26-0245d9f69104] Running
	I1027 20:04:49.927555  479026 system_pods.go:89] "kube-scheduler-custom-flannel-750423" [7d421f7e-f3ce-4bf1-a88e-614ed0ea606e] Running
	I1027 20:04:49.927559  479026 system_pods.go:89] "storage-provisioner" [23ed61f3-fc3a-43d5-b5d0-c6226de985ea] Running
	I1027 20:04:49.927574  479026 retry.go:31] will retry after 1.036806737s: missing components: kube-dns
	I1027 20:04:50.967702  479026 system_pods.go:86] 7 kube-system pods found
	I1027 20:04:50.967735  479026 system_pods.go:89] "coredns-66bc5c9577-kljgr" [4fb7da7d-dbef-4a92-b6f8-ec55ae59a2c6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:04:50.967742  479026 system_pods.go:89] "etcd-custom-flannel-750423" [c419de1f-46cc-4907-be43-19a5e64391a6] Running
	I1027 20:04:50.967751  479026 system_pods.go:89] "kube-apiserver-custom-flannel-750423" [de4fdabb-a7f5-478f-9df2-69404d40c2a8] Running
	I1027 20:04:50.967755  479026 system_pods.go:89] "kube-controller-manager-custom-flannel-750423" [66075651-08ba-4a35-b9c2-5e7d904dbbc9] Running
	I1027 20:04:50.967759  479026 system_pods.go:89] "kube-proxy-twn9b" [08589bb9-6320-42da-ad26-0245d9f69104] Running
	I1027 20:04:50.967763  479026 system_pods.go:89] "kube-scheduler-custom-flannel-750423" [7d421f7e-f3ce-4bf1-a88e-614ed0ea606e] Running
	I1027 20:04:50.967767  479026 system_pods.go:89] "storage-provisioner" [23ed61f3-fc3a-43d5-b5d0-c6226de985ea] Running
	I1027 20:04:50.967779  479026 retry.go:31] will retry after 1.357388986s: missing components: kube-dns
	I1027 20:04:52.328706  479026 system_pods.go:86] 7 kube-system pods found
	I1027 20:04:52.328745  479026 system_pods.go:89] "coredns-66bc5c9577-kljgr" [4fb7da7d-dbef-4a92-b6f8-ec55ae59a2c6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:04:52.328752  479026 system_pods.go:89] "etcd-custom-flannel-750423" [c419de1f-46cc-4907-be43-19a5e64391a6] Running
	I1027 20:04:52.328759  479026 system_pods.go:89] "kube-apiserver-custom-flannel-750423" [de4fdabb-a7f5-478f-9df2-69404d40c2a8] Running
	I1027 20:04:52.328764  479026 system_pods.go:89] "kube-controller-manager-custom-flannel-750423" [66075651-08ba-4a35-b9c2-5e7d904dbbc9] Running
	I1027 20:04:52.328769  479026 system_pods.go:89] "kube-proxy-twn9b" [08589bb9-6320-42da-ad26-0245d9f69104] Running
	I1027 20:04:52.328774  479026 system_pods.go:89] "kube-scheduler-custom-flannel-750423" [7d421f7e-f3ce-4bf1-a88e-614ed0ea606e] Running
	I1027 20:04:52.328778  479026 system_pods.go:89] "storage-provisioner" [23ed61f3-fc3a-43d5-b5d0-c6226de985ea] Running
	I1027 20:04:52.328792  479026 retry.go:31] will retry after 1.82519785s: missing components: kube-dns
	I1027 20:04:54.158495  479026 system_pods.go:86] 7 kube-system pods found
	I1027 20:04:54.158543  479026 system_pods.go:89] "coredns-66bc5c9577-kljgr" [4fb7da7d-dbef-4a92-b6f8-ec55ae59a2c6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:04:54.158552  479026 system_pods.go:89] "etcd-custom-flannel-750423" [c419de1f-46cc-4907-be43-19a5e64391a6] Running
	I1027 20:04:54.158560  479026 system_pods.go:89] "kube-apiserver-custom-flannel-750423" [de4fdabb-a7f5-478f-9df2-69404d40c2a8] Running
	I1027 20:04:54.158571  479026 system_pods.go:89] "kube-controller-manager-custom-flannel-750423" [66075651-08ba-4a35-b9c2-5e7d904dbbc9] Running
	I1027 20:04:54.158579  479026 system_pods.go:89] "kube-proxy-twn9b" [08589bb9-6320-42da-ad26-0245d9f69104] Running
	I1027 20:04:54.158584  479026 system_pods.go:89] "kube-scheduler-custom-flannel-750423" [7d421f7e-f3ce-4bf1-a88e-614ed0ea606e] Running
	I1027 20:04:54.158588  479026 system_pods.go:89] "storage-provisioner" [23ed61f3-fc3a-43d5-b5d0-c6226de985ea] Running
	I1027 20:04:54.158603  479026 retry.go:31] will retry after 1.611710883s: missing components: kube-dns
	I1027 20:04:55.774619  479026 system_pods.go:86] 7 kube-system pods found
	I1027 20:04:55.774660  479026 system_pods.go:89] "coredns-66bc5c9577-kljgr" [4fb7da7d-dbef-4a92-b6f8-ec55ae59a2c6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:04:55.774667  479026 system_pods.go:89] "etcd-custom-flannel-750423" [c419de1f-46cc-4907-be43-19a5e64391a6] Running
	I1027 20:04:55.774675  479026 system_pods.go:89] "kube-apiserver-custom-flannel-750423" [de4fdabb-a7f5-478f-9df2-69404d40c2a8] Running
	I1027 20:04:55.774679  479026 system_pods.go:89] "kube-controller-manager-custom-flannel-750423" [66075651-08ba-4a35-b9c2-5e7d904dbbc9] Running
	I1027 20:04:55.774683  479026 system_pods.go:89] "kube-proxy-twn9b" [08589bb9-6320-42da-ad26-0245d9f69104] Running
	I1027 20:04:55.774688  479026 system_pods.go:89] "kube-scheduler-custom-flannel-750423" [7d421f7e-f3ce-4bf1-a88e-614ed0ea606e] Running
	I1027 20:04:55.774693  479026 system_pods.go:89] "storage-provisioner" [23ed61f3-fc3a-43d5-b5d0-c6226de985ea] Running
	I1027 20:04:55.774707  479026 retry.go:31] will retry after 2.083756924s: missing components: kube-dns
	I1027 20:04:57.863666  479026 system_pods.go:86] 7 kube-system pods found
	I1027 20:04:57.863700  479026 system_pods.go:89] "coredns-66bc5c9577-kljgr" [4fb7da7d-dbef-4a92-b6f8-ec55ae59a2c6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 20:04:57.863708  479026 system_pods.go:89] "etcd-custom-flannel-750423" [c419de1f-46cc-4907-be43-19a5e64391a6] Running
	I1027 20:04:57.863715  479026 system_pods.go:89] "kube-apiserver-custom-flannel-750423" [de4fdabb-a7f5-478f-9df2-69404d40c2a8] Running
	I1027 20:04:57.863720  479026 system_pods.go:89] "kube-controller-manager-custom-flannel-750423" [66075651-08ba-4a35-b9c2-5e7d904dbbc9] Running
	I1027 20:04:57.863724  479026 system_pods.go:89] "kube-proxy-twn9b" [08589bb9-6320-42da-ad26-0245d9f69104] Running
	I1027 20:04:57.863730  479026 system_pods.go:89] "kube-scheduler-custom-flannel-750423" [7d421f7e-f3ce-4bf1-a88e-614ed0ea606e] Running
	I1027 20:04:57.863740  479026 system_pods.go:89] "storage-provisioner" [23ed61f3-fc3a-43d5-b5d0-c6226de985ea] Running
	I1027 20:04:57.863755  479026 retry.go:31] will retry after 2.927027278s: missing components: kube-dns
	I1027 20:05:00.795679  479026 system_pods.go:86] 7 kube-system pods found
	I1027 20:05:00.795707  479026 system_pods.go:89] "coredns-66bc5c9577-kljgr" [4fb7da7d-dbef-4a92-b6f8-ec55ae59a2c6] Running
	I1027 20:05:00.795714  479026 system_pods.go:89] "etcd-custom-flannel-750423" [c419de1f-46cc-4907-be43-19a5e64391a6] Running
	I1027 20:05:00.795719  479026 system_pods.go:89] "kube-apiserver-custom-flannel-750423" [de4fdabb-a7f5-478f-9df2-69404d40c2a8] Running
	I1027 20:05:00.795724  479026 system_pods.go:89] "kube-controller-manager-custom-flannel-750423" [66075651-08ba-4a35-b9c2-5e7d904dbbc9] Running
	I1027 20:05:00.795728  479026 system_pods.go:89] "kube-proxy-twn9b" [08589bb9-6320-42da-ad26-0245d9f69104] Running
	I1027 20:05:00.795732  479026 system_pods.go:89] "kube-scheduler-custom-flannel-750423" [7d421f7e-f3ce-4bf1-a88e-614ed0ea606e] Running
	I1027 20:05:00.795736  479026 system_pods.go:89] "storage-provisioner" [23ed61f3-fc3a-43d5-b5d0-c6226de985ea] Running
	I1027 20:05:00.795743  479026 system_pods.go:126] duration metric: took 13.912156699s to wait for k8s-apps to be running ...
	I1027 20:05:00.795750  479026 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 20:05:00.795806  479026 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 20:05:00.814074  479026 system_svc.go:56] duration metric: took 18.314082ms WaitForService to wait for kubelet
	I1027 20:05:00.814102  479026 kubeadm.go:586] duration metric: took 19.892487349s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 20:05:00.814121  479026 node_conditions.go:102] verifying NodePressure condition ...
	I1027 20:05:00.817379  479026 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1027 20:05:00.817412  479026 node_conditions.go:123] node cpu capacity is 2
	I1027 20:05:00.817428  479026 node_conditions.go:105] duration metric: took 3.30079ms to run NodePressure ...
	I1027 20:05:00.817440  479026 start.go:241] waiting for startup goroutines ...
	I1027 20:05:00.817448  479026 start.go:246] waiting for cluster config update ...
	I1027 20:05:00.817464  479026 start.go:255] writing updated cluster config ...
	I1027 20:05:00.817745  479026 ssh_runner.go:195] Run: rm -f paused
	I1027 20:05:00.823914  479026 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 20:05:00.827695  479026 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kljgr" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:05:00.832860  479026 pod_ready.go:94] pod "coredns-66bc5c9577-kljgr" is "Ready"
	I1027 20:05:00.832934  479026 pod_ready.go:86] duration metric: took 5.170752ms for pod "coredns-66bc5c9577-kljgr" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:05:00.835186  479026 pod_ready.go:83] waiting for pod "etcd-custom-flannel-750423" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:05:00.839919  479026 pod_ready.go:94] pod "etcd-custom-flannel-750423" is "Ready"
	I1027 20:05:00.839948  479026 pod_ready.go:86] duration metric: took 4.697748ms for pod "etcd-custom-flannel-750423" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:05:00.842655  479026 pod_ready.go:83] waiting for pod "kube-apiserver-custom-flannel-750423" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:05:00.848454  479026 pod_ready.go:94] pod "kube-apiserver-custom-flannel-750423" is "Ready"
	I1027 20:05:00.848496  479026 pod_ready.go:86] duration metric: took 5.8124ms for pod "kube-apiserver-custom-flannel-750423" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:05:00.851473  479026 pod_ready.go:83] waiting for pod "kube-controller-manager-custom-flannel-750423" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:05:01.233653  479026 pod_ready.go:94] pod "kube-controller-manager-custom-flannel-750423" is "Ready"
	I1027 20:05:01.233683  479026 pod_ready.go:86] duration metric: took 382.183783ms for pod "kube-controller-manager-custom-flannel-750423" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:05:01.428434  479026 pod_ready.go:83] waiting for pod "kube-proxy-twn9b" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:05:01.828400  479026 pod_ready.go:94] pod "kube-proxy-twn9b" is "Ready"
	I1027 20:05:01.828433  479026 pod_ready.go:86] duration metric: took 399.971127ms for pod "kube-proxy-twn9b" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:05:02.028706  479026 pod_ready.go:83] waiting for pod "kube-scheduler-custom-flannel-750423" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:05:02.427524  479026 pod_ready.go:94] pod "kube-scheduler-custom-flannel-750423" is "Ready"
	I1027 20:05:02.427547  479026 pod_ready.go:86] duration metric: took 398.810666ms for pod "kube-scheduler-custom-flannel-750423" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 20:05:02.427559  479026 pod_ready.go:40] duration metric: took 1.60360848s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 20:05:02.530565  479026 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1027 20:05:02.534004  479026 out.go:179] * Done! kubectl is now configured to use "custom-flannel-750423" cluster and "default" namespace by default
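The minor-skew note above (kubectl 1.33.2 against cluster 1.34.1) is informational: kubectl supports a skew of one minor version in either direction. With the context now set by "Done!", a first sanity check might look like this sketch:

	kubectl config current-context   # -> custom-flannel-750423
	kubectl get nodes -o wide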
	
	
	==> CRI-O <==
	Oct 27 20:04:37 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:37.297475386Z" level=info msg="Removing container: 3b21c5ea313fed73e770889f8361d63d255fa877d5227dbb3a02f14257e78c0d" id=1a956f45-90c4-4ad1-b6ac-9bea2385a71d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 20:04:37 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:37.303470796Z" level=info msg="Error loading conmon cgroup of container 3b21c5ea313fed73e770889f8361d63d255fa877d5227dbb3a02f14257e78c0d: cgroup deleted" id=1a956f45-90c4-4ad1-b6ac-9bea2385a71d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 20:04:37 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:37.306428695Z" level=info msg="Removed container 3b21c5ea313fed73e770889f8361d63d255fa877d5227dbb3a02f14257e78c0d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4kn22/dashboard-metrics-scraper" id=1a956f45-90c4-4ad1-b6ac-9bea2385a71d name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 20:04:43 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:43.164208804Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 20:04:43 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:43.167319347Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 20:04:43 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:43.167353849Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 20:04:43 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:43.167375518Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 20:04:43 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:43.170838297Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 20:04:43 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:43.170870017Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 20:04:43 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:43.170893245Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 20:04:43 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:43.17382959Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 20:04:43 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:43.173863255Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 20:04:43 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:43.173885982Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 27 20:04:43 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:43.177234673Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 20:04:43 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:43.177265515Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 20:04:57 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:57.97843853Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a55cd66f-bfac-4c41-b991-491347f70fe5 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:04:57 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:57.98471998Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=dea449c8-60a2-496b-a424-a641307cf19d name=/runtime.v1.ImageService/ImageStatus
	Oct 27 20:04:57 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:57.986942982Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4kn22/dashboard-metrics-scraper" id=f06b8d50-e846-4d37-ba54-2ed912d5a048 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 20:04:57 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:57.987099187Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:04:57 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:57.994734191Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:04:57 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:57.995599028Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 20:04:58 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:58.017851553Z" level=info msg="Created container a1c86c4e14c18a2f8229ec50c7b75a1fc3ad0e89f2e341db55d3435d58734e1e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4kn22/dashboard-metrics-scraper" id=f06b8d50-e846-4d37-ba54-2ed912d5a048 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 20:04:58 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:58.025895652Z" level=info msg="Starting container: a1c86c4e14c18a2f8229ec50c7b75a1fc3ad0e89f2e341db55d3435d58734e1e" id=31ed0932-903d-4755-b574-495bb22f1a4b name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 20:04:58 default-k8s-diff-port-073048 crio[648]: time="2025-10-27T20:04:58.031599387Z" level=info msg="Started container" PID=1762 containerID=a1c86c4e14c18a2f8229ec50c7b75a1fc3ad0e89f2e341db55d3435d58734e1e description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4kn22/dashboard-metrics-scraper id=31ed0932-903d-4755-b574-495bb22f1a4b name=/runtime.v1.RuntimeService/StartContainer sandboxID=ee91aac62a0ff5691bad9ffea37ad3246623795556f789f8dc1af53258b47fc3
	Oct 27 20:04:58 default-k8s-diff-port-073048 conmon[1760]: conmon a1c86c4e14c18a2f8229 <ninfo>: container 1762 exited with status 1
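conmon reports the container exiting with status 1; per the container status table below this is already attempt 3 for dashboard-metrics-scraper, i.e. a crash loop. A sketch of the usual next diagnostic step, using the pod name from this log:

	# inspect output of the previous (crashed) container instance
	kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-4kn22 --previous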
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	a1c86c4e14c18       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           5 seconds ago        Exited              dashboard-metrics-scraper   3                   ee91aac62a0ff       dashboard-metrics-scraper-6ffb444bf9-4kn22             kubernetes-dashboard
	2115489a06bb5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago       Exited              dashboard-metrics-scraper   2                   ee91aac62a0ff       dashboard-metrics-scraper-6ffb444bf9-4kn22             kubernetes-dashboard
	ca3df37f2ff7a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           29 seconds ago       Running             storage-provisioner         2                   8001cdfcfbf04       storage-provisioner                                    kube-system
	7c44bc52e5ff9       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   50 seconds ago       Running             kubernetes-dashboard        0                   44933a58a5c4b       kubernetes-dashboard-855c9754f9-lrj9p                  kubernetes-dashboard
	483b35ee60722       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   db2f1218c5fa9       busybox                                                default
	33d4c8937c642       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   8f708a3bb8063       coredns-66bc5c9577-6vc9v                               kube-system
	392cb2b4d36cd       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   8001cdfcfbf04       storage-provisioner                                    kube-system
	4a9c8f23bb639       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   3e6b761796c6c       kindnet-qc8zw                                          kube-system
	ba49fcd5f05b8       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   1376f5202b3ec       kube-proxy-dsq46                                       kube-system
	ee7be1ab30d5b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   26afeb77d73fc       etcd-default-k8s-diff-port-073048                      kube-system
	420cfe1b91ebd       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   bd3b380a662fa       kube-scheduler-default-k8s-diff-port-073048            kube-system
	70203b34337df       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   b7c87806a5193       kube-controller-manager-default-k8s-diff-port-073048   kube-system
	47af99655b9f0       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   b0787327c5064       kube-apiserver-default-k8s-diff-port-073048            kube-system
	
	
	==> coredns [33d4c8937c642e9e870f4db040cb94a4ed803df3edc604abb12f7c56ef7a0d44] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35892 - 53406 "HINFO IN 3675442782115736736.5461005420113912273. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004851926s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-073048
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-073048
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=default-k8s-diff-port-073048
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T20_02_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 20:02:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-073048
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 20:04:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 20:04:51 +0000   Mon, 27 Oct 2025 20:02:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 20:04:51 +0000   Mon, 27 Oct 2025 20:02:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 20:04:51 +0000   Mon, 27 Oct 2025 20:02:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 20:04:51 +0000   Mon, 27 Oct 2025 20:03:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-073048
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                dd93b306-3965-477c-8572-564479b43098
	  Boot ID:                    23ea05b4-8203-4fb9-a84a-5deb4b091cbb
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 coredns-66bc5c9577-6vc9v                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m30s
	  kube-system                 etcd-default-k8s-diff-port-073048                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m36s
	  kube-system                 kindnet-qc8zw                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m31s
	  kube-system                 kube-apiserver-default-k8s-diff-port-073048             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-073048    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 kube-proxy-dsq46                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-scheduler-default-k8s-diff-port-073048             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-4kn22              0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-lrj9p                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m28s              kube-proxy       
	  Normal   Starting                 60s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m36s              kubelet          Node default-k8s-diff-port-073048 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m36s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m36s              kubelet          Node default-k8s-diff-port-073048 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m36s              kubelet          Node default-k8s-diff-port-073048 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m36s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m31s              node-controller  Node default-k8s-diff-port-073048 event: Registered Node default-k8s-diff-port-073048 in Controller
	  Normal   NodeReady                109s               kubelet          Node default-k8s-diff-port-073048 status is now: NodeReady
	  Normal   Starting                 73s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 73s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  72s (x8 over 73s)  kubelet          Node default-k8s-diff-port-073048 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    72s (x8 over 73s)  kubelet          Node default-k8s-diff-port-073048 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     72s (x8 over 73s)  kubelet          Node default-k8s-diff-port-073048 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           59s                node-controller  Node default-k8s-diff-port-073048 event: Registered Node default-k8s-diff-port-073048 in Controller
	
	
	==> dmesg <==
	[Oct27 19:41] overlayfs: idmapped layers are currently not supported
	[ +28.455216] overlayfs: idmapped layers are currently not supported
	[Oct27 19:42] overlayfs: idmapped layers are currently not supported
	[ +26.490174] overlayfs: idmapped layers are currently not supported
	[Oct27 19:44] overlayfs: idmapped layers are currently not supported
	[Oct27 19:45] overlayfs: idmapped layers are currently not supported
	[Oct27 19:47] overlayfs: idmapped layers are currently not supported
	[Oct27 19:49] overlayfs: idmapped layers are currently not supported
	[ +31.410335] overlayfs: idmapped layers are currently not supported
	[Oct27 19:51] overlayfs: idmapped layers are currently not supported
	[Oct27 19:53] overlayfs: idmapped layers are currently not supported
	[Oct27 19:54] overlayfs: idmapped layers are currently not supported
	[Oct27 19:55] overlayfs: idmapped layers are currently not supported
	[Oct27 19:56] overlayfs: idmapped layers are currently not supported
	[Oct27 19:57] overlayfs: idmapped layers are currently not supported
	[Oct27 19:58] overlayfs: idmapped layers are currently not supported
	[Oct27 19:59] overlayfs: idmapped layers are currently not supported
	[Oct27 20:00] overlayfs: idmapped layers are currently not supported
	[ +41.321877] overlayfs: idmapped layers are currently not supported
	[Oct27 20:01] overlayfs: idmapped layers are currently not supported
	[Oct27 20:02] overlayfs: idmapped layers are currently not supported
	[Oct27 20:03] overlayfs: idmapped layers are currently not supported
	[ +26.735505] overlayfs: idmapped layers are currently not supported
	[ +12.481352] overlayfs: idmapped layers are currently not supported
	[Oct27 20:04] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ee7be1ab30d5b150cf1282ac50a0ab38c89e1cf1e5e6081e3e51571014d16a0b] <==
	{"level":"warn","ts":"2025-10-27T20:03:57.654295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:57.707386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:57.707875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:57.721435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:57.740762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:57.777912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:57.794528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:57.831734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:57.898749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:57.909436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:57.942185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:57.989193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:58.023318Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:58.040564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:58.064360Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:58.098260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:58.132719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:58.149027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:58.194863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:58.227946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:58.271334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:58.289466Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:58.305956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:58.392069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T20:03:58.525361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57462","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:05:04 up  2:47,  0 user,  load average: 4.37, 3.64, 2.94
	Linux default-k8s-diff-port-073048 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4a9c8f23bb6399f3527d0263bd18f2693466d4304ee4f8059f1eb907bc160eab] <==
	I1027 20:04:02.919605       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 20:04:02.919872       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1027 20:04:02.920005       1 main.go:148] setting mtu 1500 for CNI 
	I1027 20:04:02.920018       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 20:04:02.920027       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T20:04:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 20:04:03.174894       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 20:04:03.175027       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 20:04:03.175081       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 20:04:03.175997       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 20:04:33.164793       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1027 20:04:33.176344       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1027 20:04:33.176441       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1027 20:04:33.176355       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1027 20:04:34.475663       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 20:04:34.475786       1 metrics.go:72] Registering metrics
	I1027 20:04:34.475891       1 controller.go:711] "Syncing nftables rules"
	I1027 20:04:43.163918       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 20:04:43.163976       1 main.go:301] handling current node
	I1027 20:04:53.164563       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 20:04:53.164599       1 main.go:301] handling current node
	I1027 20:05:03.164584       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 20:05:03.164623       1 main.go:301] handling current node
	
	
	==> kube-apiserver [47af99655b9f01b93fb734ba61f2da0603ae3f7fa9c76d3c797aeb6e931e722d] <==
	I1027 20:04:00.384741       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 20:04:00.384750       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 20:04:00.415202       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 20:04:00.415692       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 20:04:00.432680       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1027 20:04:00.432712       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1027 20:04:00.444821       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1027 20:04:00.451509       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 20:04:00.451658       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1027 20:04:00.451902       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 20:04:00.475901       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 20:04:00.481162       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 20:04:00.490891       1 cache.go:39] Caches are synced for autoregister controller
	E1027 20:04:00.518631       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1027 20:04:00.571892       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 20:04:01.674588       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 20:04:01.957106       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 20:04:02.160333       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 20:04:02.337871       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 20:04:02.413553       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 20:04:03.013276       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.187.110"}
	I1027 20:04:03.111888       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.185.126"}
	I1027 20:04:05.575426       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 20:04:05.624420       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 20:04:05.676035       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [70203b34337df7979e43ac6fb0f4905a8cdc06feeb089b41d83af7060159d8da] <==
	I1027 20:04:05.173637       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1027 20:04:05.174030       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 20:04:05.176654       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1027 20:04:05.176762       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 20:04:05.176815       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 20:04:05.176987       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 20:04:05.182465       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 20:04:05.190779       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 20:04:05.196300       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 20:04:05.204277       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1027 20:04:05.208501       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 20:04:05.211749       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 20:04:05.215196       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 20:04:05.218247       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 20:04:05.219463       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 20:04:05.219514       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 20:04:05.219536       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 20:04:05.219682       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 20:04:05.221901       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 20:04:05.226172       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1027 20:04:05.232589       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 20:04:05.232741       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 20:04:05.241003       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 20:04:05.241030       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 20:04:05.241038       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [ba49fcd5f05b897c504ab81db54a96c17c13d29d0b5bac3058cf7c87bc70aa26] <==
	I1027 20:04:03.444442       1 server_linux.go:53] "Using iptables proxy"
	I1027 20:04:03.537249       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 20:04:03.638206       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 20:04:03.638251       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1027 20:04:03.638337       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 20:04:03.891399       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 20:04:03.891512       1 server_linux.go:132] "Using iptables Proxier"
	I1027 20:04:03.895804       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 20:04:03.896186       1 server.go:527] "Version info" version="v1.34.1"
	I1027 20:04:03.896657       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 20:04:03.898119       1 config.go:200] "Starting service config controller"
	I1027 20:04:03.898183       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 20:04:03.898223       1 config.go:106] "Starting endpoint slice config controller"
	I1027 20:04:03.898257       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 20:04:03.898295       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 20:04:03.898321       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 20:04:03.899055       1 config.go:309] "Starting node config controller"
	I1027 20:04:03.899103       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 20:04:03.899133       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 20:04:03.999175       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 20:04:03.999243       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 20:04:03.999257       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [420cfe1b91ebd7a517df417e1a1f173e78e7e81a49b61d73bcec204c005ecf7c] <==
	I1027 20:03:59.015461       1 serving.go:386] Generated self-signed cert in-memory
	I1027 20:04:03.353292       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 20:04:03.353334       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 20:04:03.358024       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1027 20:04:03.358132       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1027 20:04:03.358215       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:04:03.358447       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 20:04:03.358519       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 20:04:03.358553       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 20:04:03.367598       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 20:04:03.367723       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 20:04:03.459292       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 20:04:03.459419       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1027 20:04:03.459537       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 20:04:06 default-k8s-diff-port-073048 kubelet[774]: I1027 20:04:06.006152     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6aa8c881-b779-41da-a7c1-29defdba0f2c-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-4kn22\" (UID: \"6aa8c881-b779-41da-a7c1-29defdba0f2c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4kn22"
	Oct 27 20:04:06 default-k8s-diff-port-073048 kubelet[774]: I1027 20:04:06.006216     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqvf4\" (UniqueName: \"kubernetes.io/projected/a3673ae3-9469-4d0e-9186-0b159e83baa7-kube-api-access-bqvf4\") pod \"kubernetes-dashboard-855c9754f9-lrj9p\" (UID: \"a3673ae3-9469-4d0e-9186-0b159e83baa7\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lrj9p"
	Oct 27 20:04:06 default-k8s-diff-port-073048 kubelet[774]: I1027 20:04:06.006243     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a3673ae3-9469-4d0e-9186-0b159e83baa7-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-lrj9p\" (UID: \"a3673ae3-9469-4d0e-9186-0b159e83baa7\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lrj9p"
	Oct 27 20:04:06 default-k8s-diff-port-073048 kubelet[774]: I1027 20:04:06.006267     774 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89xqd\" (UniqueName: \"kubernetes.io/projected/6aa8c881-b779-41da-a7c1-29defdba0f2c-kube-api-access-89xqd\") pod \"dashboard-metrics-scraper-6ffb444bf9-4kn22\" (UID: \"6aa8c881-b779-41da-a7c1-29defdba0f2c\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4kn22"
	Oct 27 20:04:06 default-k8s-diff-port-073048 kubelet[774]: W1027 20:04:06.279086     774 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0d0a6d2c139cfccb44f717c1d4bf1de32b26fb3f98fc7320c9146a486f5ddfbb/crio-44933a58a5c4b385161f99b706b221baf68307467a5fa4c7b89172aecce63993 WatchSource:0}: Error finding container 44933a58a5c4b385161f99b706b221baf68307467a5fa4c7b89172aecce63993: Status 404 returned error can't find the container with id 44933a58a5c4b385161f99b706b221baf68307467a5fa4c7b89172aecce63993
	Oct 27 20:04:19 default-k8s-diff-port-073048 kubelet[774]: I1027 20:04:19.226558     774 scope.go:117] "RemoveContainer" containerID="c5af2f2ce8bb0096770169b94c87cc475253590437e67349aba60a996b9327a3"
	Oct 27 20:04:19 default-k8s-diff-port-073048 kubelet[774]: I1027 20:04:19.246653     774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-lrj9p" podStartSLOduration=7.555404663 podStartE2EDuration="14.24663376s" podCreationTimestamp="2025-10-27 20:04:05 +0000 UTC" firstStartedPulling="2025-10-27 20:04:06.282679865 +0000 UTC m=+14.623201146" lastFinishedPulling="2025-10-27 20:04:12.97390897 +0000 UTC m=+21.314430243" observedRunningTime="2025-10-27 20:04:13.216595021 +0000 UTC m=+21.557116293" watchObservedRunningTime="2025-10-27 20:04:19.24663376 +0000 UTC m=+27.587155033"
	Oct 27 20:04:20 default-k8s-diff-port-073048 kubelet[774]: I1027 20:04:20.230453     774 scope.go:117] "RemoveContainer" containerID="c5af2f2ce8bb0096770169b94c87cc475253590437e67349aba60a996b9327a3"
	Oct 27 20:04:20 default-k8s-diff-port-073048 kubelet[774]: I1027 20:04:20.231283     774 scope.go:117] "RemoveContainer" containerID="3b21c5ea313fed73e770889f8361d63d255fa877d5227dbb3a02f14257e78c0d"
	Oct 27 20:04:20 default-k8s-diff-port-073048 kubelet[774]: E1027 20:04:20.231516     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4kn22_kubernetes-dashboard(6aa8c881-b779-41da-a7c1-29defdba0f2c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4kn22" podUID="6aa8c881-b779-41da-a7c1-29defdba0f2c"
	Oct 27 20:04:21 default-k8s-diff-port-073048 kubelet[774]: I1027 20:04:21.235229     774 scope.go:117] "RemoveContainer" containerID="3b21c5ea313fed73e770889f8361d63d255fa877d5227dbb3a02f14257e78c0d"
	Oct 27 20:04:21 default-k8s-diff-port-073048 kubelet[774]: E1027 20:04:21.235409     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4kn22_kubernetes-dashboard(6aa8c881-b779-41da-a7c1-29defdba0f2c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4kn22" podUID="6aa8c881-b779-41da-a7c1-29defdba0f2c"
	Oct 27 20:04:26 default-k8s-diff-port-073048 kubelet[774]: I1027 20:04:26.192113     774 scope.go:117] "RemoveContainer" containerID="3b21c5ea313fed73e770889f8361d63d255fa877d5227dbb3a02f14257e78c0d"
	Oct 27 20:04:26 default-k8s-diff-port-073048 kubelet[774]: E1027 20:04:26.192307     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4kn22_kubernetes-dashboard(6aa8c881-b779-41da-a7c1-29defdba0f2c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4kn22" podUID="6aa8c881-b779-41da-a7c1-29defdba0f2c"
	Oct 27 20:04:34 default-k8s-diff-port-073048 kubelet[774]: I1027 20:04:34.268086     774 scope.go:117] "RemoveContainer" containerID="392cb2b4d36cd351dcd3237b475b909427deb573bd6d67128500918d61224f5a"
	Oct 27 20:04:36 default-k8s-diff-port-073048 kubelet[774]: I1027 20:04:36.977388     774 scope.go:117] "RemoveContainer" containerID="3b21c5ea313fed73e770889f8361d63d255fa877d5227dbb3a02f14257e78c0d"
	Oct 27 20:04:37 default-k8s-diff-port-073048 kubelet[774]: I1027 20:04:37.287815     774 scope.go:117] "RemoveContainer" containerID="3b21c5ea313fed73e770889f8361d63d255fa877d5227dbb3a02f14257e78c0d"
	Oct 27 20:04:37 default-k8s-diff-port-073048 kubelet[774]: I1027 20:04:37.288115     774 scope.go:117] "RemoveContainer" containerID="2115489a06bb5e5da51f7bed723595a80cff36c03176614c6ce4478ce4e468e7"
	Oct 27 20:04:37 default-k8s-diff-port-073048 kubelet[774]: E1027 20:04:37.288270     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4kn22_kubernetes-dashboard(6aa8c881-b779-41da-a7c1-29defdba0f2c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4kn22" podUID="6aa8c881-b779-41da-a7c1-29defdba0f2c"
	Oct 27 20:04:46 default-k8s-diff-port-073048 kubelet[774]: I1027 20:04:46.193733     774 scope.go:117] "RemoveContainer" containerID="2115489a06bb5e5da51f7bed723595a80cff36c03176614c6ce4478ce4e468e7"
	Oct 27 20:04:46 default-k8s-diff-port-073048 kubelet[774]: E1027 20:04:46.193967     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4kn22_kubernetes-dashboard(6aa8c881-b779-41da-a7c1-29defdba0f2c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4kn22" podUID="6aa8c881-b779-41da-a7c1-29defdba0f2c"
	Oct 27 20:04:57 default-k8s-diff-port-073048 kubelet[774]: I1027 20:04:57.977370     774 scope.go:117] "RemoveContainer" containerID="2115489a06bb5e5da51f7bed723595a80cff36c03176614c6ce4478ce4e468e7"
	Oct 27 20:04:58 default-k8s-diff-port-073048 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 20:04:58 default-k8s-diff-port-073048 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 20:04:58 default-k8s-diff-port-073048 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [7c44bc52e5ff96e93a4c96064dd09128b39f114debcf592f489e9ef5f042766b] <==
	2025/10/27 20:04:13 Using namespace: kubernetes-dashboard
	2025/10/27 20:04:13 Using in-cluster config to connect to apiserver
	2025/10/27 20:04:13 Using secret token for csrf signing
	2025/10/27 20:04:13 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 20:04:13 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 20:04:13 Successful initial request to the apiserver, version: v1.34.1
	2025/10/27 20:04:13 Generating JWE encryption key
	2025/10/27 20:04:13 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 20:04:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 20:04:13 Initializing JWE encryption key from synchronized object
	2025/10/27 20:04:13 Creating in-cluster Sidecar client
	2025/10/27 20:04:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 20:04:13 Serving insecurely on HTTP port: 9090
	2025/10/27 20:04:13 Starting overwatch
	2025/10/27 20:04:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [392cb2b4d36cd351dcd3237b475b909427deb573bd6d67128500918d61224f5a] <==
	I1027 20:04:03.264465       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 20:04:33.267670       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ca3df37f2ff7a2e1fc49a86c023182e51049920e03c757f37b9467c42e204794] <==
	W1027 20:04:34.374453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:04:37.832498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:04:42.093733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:04:45.693240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:04:48.748710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:04:51.773292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:04:51.779755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 20:04:51.779905       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 20:04:51.780069       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-073048_99b8e111-e31f-44bd-9d1a-b5e449d12627!
	I1027 20:04:51.780207       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"19a71778-5006-4e48-afac-9e5dd7131511", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-073048_99b8e111-e31f-44bd-9d1a-b5e449d12627 became leader
	W1027 20:04:51.782506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:04:51.796813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 20:04:51.883343       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-073048_99b8e111-e31f-44bd-9d1a-b5e449d12627!
	W1027 20:04:53.799999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:04:53.804391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:04:55.807486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:04:55.811995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:04:57.815628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:04:57.822923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:04:59.826538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:04:59.835268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:05:01.838808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:05:01.845307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:05:03.849207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 20:05:03.857674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
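A pattern worth calling out in the logs above: coredns, kindnet, and the first storage-provisioner container all fail with "dial tcp 10.96.0.1:443: i/o timeout" while the apiserver restarts, and all recover once caches sync around 20:04:33. Were the cluster still running (it is deleted during teardown), a minimal reachability check for the Service VIP could look like the sketch below, reusing the kubeconfig context and the busybox pod from this run; the nc invocation assumes the busybox image's nc applet supports -w:

  # list the apiserver endpoints backing the kubernetes Service (VIP 10.96.0.1)
  kubectl --context default-k8s-diff-port-073048 get endpoints kubernetes
  # TCP-probe the VIP from inside the cluster via the existing busybox pod
  kubectl --context default-k8s-diff-port-073048 exec busybox -- sh -c 'echo | nc -w 2 10.96.0.1 443 && echo VIP reachable'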
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-073048 -n default-k8s-diff-port-073048
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-073048 -n default-k8s-diff-port-073048: exit status 2 (526.496577ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-073048 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (8.03s)
E1027 20:11:03.119478  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
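The step that fails here is minikube's own pause flow, not the workload: the pause did not take effect, and the post-mortem status template above still reported "Running". A hand-driven approximation of what serial/Pause does, using the profile name from this run (a throwaway local profile would normally be substituted):

  out/minikube-linux-arm64 pause -p default-k8s-diff-port-073048 --alsologtostderr -v=1
  # after a successful pause the template below prints "Paused"; this run still printed "Running"
  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-073048

The trailing cert_rotation error is most likely the test binary's client-go noticing that the profile's client.crt disappeared when the profile was deleted during cleanup; it post-dates the failure itself.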

                                                
                                    

Test pass (260/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.45
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 5.7
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.1
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 172.89
31 TestAddons/serial/GCPAuth/Namespaces 0.21
32 TestAddons/serial/GCPAuth/FakeCredentials 10.86
48 TestAddons/StoppedEnableDisable 12.44
49 TestCertOptions 36.02
50 TestCertExpiration 236.45
52 TestForceSystemdFlag 44.24
53 TestForceSystemdEnv 39.03
58 TestErrorSpam/setup 36.51
59 TestErrorSpam/start 0.82
60 TestErrorSpam/status 1.25
61 TestErrorSpam/pause 6.02
62 TestErrorSpam/unpause 5.69
63 TestErrorSpam/stop 1.52
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 78.75
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 29.29
70 TestFunctional/serial/KubeContext 0.07
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.48
75 TestFunctional/serial/CacheCmd/cache/add_local 1.06
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.82
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 36.81
84 TestFunctional/serial/ComponentHealth 0.09
85 TestFunctional/serial/LogsCmd 1.46
86 TestFunctional/serial/LogsFileCmd 1.52
87 TestFunctional/serial/InvalidService 4.74
89 TestFunctional/parallel/ConfigCmd 0.54
90 TestFunctional/parallel/DashboardCmd 7.55
91 TestFunctional/parallel/DryRun 0.5
92 TestFunctional/parallel/InternationalLanguage 0.23
93 TestFunctional/parallel/StatusCmd 1.03
98 TestFunctional/parallel/AddonsCmd 0.16
99 TestFunctional/parallel/PersistentVolumeClaim 26.67
101 TestFunctional/parallel/SSHCmd 0.54
102 TestFunctional/parallel/CpCmd 2.13
104 TestFunctional/parallel/FileSync 0.34
105 TestFunctional/parallel/CertSync 2.31
109 TestFunctional/parallel/NodeLabels 0.12
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.8
113 TestFunctional/parallel/License 0.42
114 TestFunctional/parallel/Version/short 0.05
115 TestFunctional/parallel/Version/components 1.16
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
120 TestFunctional/parallel/ImageCommands/ImageBuild 3.9
121 TestFunctional/parallel/ImageCommands/Setup 0.66
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.51
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.53
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.32
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
144 TestFunctional/parallel/ServiceCmd/List 0.51
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
150 TestFunctional/parallel/ProfileCmd/profile_list 0.41
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
152 TestFunctional/parallel/MountCmd/any-port 7.42
153 TestFunctional/parallel/MountCmd/specific-port 2.22
154 TestFunctional/parallel/MountCmd/VerifyCleanup 2.03
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.01
162 TestMultiControlPlane/serial/StartCluster 212.07
163 TestMultiControlPlane/serial/DeployApp 7.36
164 TestMultiControlPlane/serial/PingHostFromPods 1.51
165 TestMultiControlPlane/serial/AddWorkerNode 58.54
166 TestMultiControlPlane/serial/NodeLabels 0.12
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.04
168 TestMultiControlPlane/serial/CopyFile 19.76
169 TestMultiControlPlane/serial/StopSecondaryNode 13.05
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.82
171 TestMultiControlPlane/serial/RestartSecondaryNode 27.39
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.22
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 127.4
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.68
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.8
176 TestMultiControlPlane/serial/StopCluster 36.02
177 TestMultiControlPlane/serial/RestartCluster 74.91
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.78
179 TestMultiControlPlane/serial/AddSecondaryNode 92.37
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.4
185 TestJSONOutput/start/Command 79.61
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.8
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 40.98
211 TestKicCustomNetwork/use_default_bridge_network 35.52
212 TestKicExistingNetwork 39.6
213 TestKicCustomSubnet 40.68
214 TestKicStaticIP 35.54
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 74.29
219 TestMountStart/serial/StartWithMountFirst 10.66
220 TestMountStart/serial/VerifyMountFirst 0.28
221 TestMountStart/serial/StartWithMountSecond 9.39
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.72
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.29
226 TestMountStart/serial/RestartStopped 7.76
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 134.5
231 TestMultiNode/serial/DeployApp2Nodes 5.04
232 TestMultiNode/serial/PingHostFrom2Pods 0.9
233 TestMultiNode/serial/AddNode 58.58
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.71
236 TestMultiNode/serial/CopyFile 10.39
237 TestMultiNode/serial/StopNode 2.42
238 TestMultiNode/serial/StartAfterStop 8.54
239 TestMultiNode/serial/RestartKeepsNodes 78.92
240 TestMultiNode/serial/DeleteNode 5.85
241 TestMultiNode/serial/StopMultiNode 23.99
242 TestMultiNode/serial/RestartMultiNode 50.86
243 TestMultiNode/serial/ValidateNameConflict 35.91
248 TestPreload 158.92
250 TestScheduledStopUnix 109.64
253 TestInsufficientStorage 13.79
254 TestRunningBinaryUpgrade 56.27
256 TestKubernetesUpgrade 450.44
257 TestMissingContainerUpgrade 121.36
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 43.26
261 TestNoKubernetes/serial/StartWithStopK8s 17.38
262 TestNoKubernetes/serial/Start 9.95
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
264 TestNoKubernetes/serial/ProfileList 0.68
265 TestNoKubernetes/serial/Stop 1.28
266 TestNoKubernetes/serial/StartNoArgs 6.71
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
268 TestStoppedBinaryUpgrade/Setup 2.39
269 TestStoppedBinaryUpgrade/Upgrade 56.53
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.14
279 TestPause/serial/Start 84.98
280 TestPause/serial/SecondStartNoReconfiguration 28.84
289 TestNetworkPlugins/group/false 3.59
294 TestStartStop/group/old-k8s-version/serial/FirstStart 62.36
295 TestStartStop/group/old-k8s-version/serial/DeployApp 9.41
297 TestStartStop/group/old-k8s-version/serial/Stop 11.99
298 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
299 TestStartStop/group/old-k8s-version/serial/SecondStart 55.63
301 TestStartStop/group/no-preload/serial/FirstStart 77.03
302 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
303 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.18
304 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.35
307 TestStartStop/group/embed-certs/serial/FirstStart 85.11
308 TestStartStop/group/no-preload/serial/DeployApp 10.39
310 TestStartStop/group/no-preload/serial/Stop 12.03
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
312 TestStartStop/group/no-preload/serial/SecondStart 53.54
313 TestStartStop/group/embed-certs/serial/DeployApp 8.33
315 TestStartStop/group/embed-certs/serial/Stop 12.03
316 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
317 TestStartStop/group/embed-certs/serial/SecondStart 54.56
318 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
319 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
320 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.32
323 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 78.61
324 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
325 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
326 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
329 TestStartStop/group/newest-cni/serial/FirstStart 41.72
330 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.4
332 TestStartStop/group/newest-cni/serial/DeployApp 0
334 TestStartStop/group/newest-cni/serial/Stop 1.49
335 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.3
336 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.29
337 TestStartStop/group/newest-cni/serial/SecondStart 16.37
338 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.35
339 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 62.72
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.33
344 TestNetworkPlugins/group/custom-flannel/Start 61.88
345 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
346 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
347 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
349 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.41
350 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.45
351 TestNetworkPlugins/group/auto/Start 89.83
352 TestNetworkPlugins/group/custom-flannel/DNS 0.2
353 TestNetworkPlugins/group/custom-flannel/Localhost 0.21
354 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
355 TestNetworkPlugins/group/kindnet/Start 83.28
356 TestNetworkPlugins/group/auto/KubeletFlags 0.3
357 TestNetworkPlugins/group/auto/NetCatPod 10.29
358 TestNetworkPlugins/group/auto/DNS 0.17
359 TestNetworkPlugins/group/auto/Localhost 0.14
360 TestNetworkPlugins/group/auto/HairPin 0.17
361 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
362 TestNetworkPlugins/group/kindnet/KubeletFlags 0.46
363 TestNetworkPlugins/group/kindnet/NetCatPod 12.37
364 TestNetworkPlugins/group/flannel/Start 63.44
365 TestNetworkPlugins/group/kindnet/DNS 0.28
366 TestNetworkPlugins/group/kindnet/Localhost 0.18
367 TestNetworkPlugins/group/kindnet/HairPin 0.15
368 TestNetworkPlugins/group/enable-default-cni/Start 74.45
369 TestNetworkPlugins/group/flannel/ControllerPod 6.01
370 TestNetworkPlugins/group/flannel/KubeletFlags 0.48
371 TestNetworkPlugins/group/flannel/NetCatPod 11.38
372 TestNetworkPlugins/group/flannel/DNS 0.15
373 TestNetworkPlugins/group/flannel/Localhost 0.14
374 TestNetworkPlugins/group/flannel/HairPin 0.15
375 TestNetworkPlugins/group/bridge/Start 87.34
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.35
378 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
379 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
380 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
381 TestNetworkPlugins/group/calico/Start 64.12
382 TestNetworkPlugins/group/bridge/KubeletFlags 0.38
383 TestNetworkPlugins/group/bridge/NetCatPod 11.35
384 TestNetworkPlugins/group/bridge/DNS 0.21
385 TestNetworkPlugins/group/bridge/Localhost 0.13
386 TestNetworkPlugins/group/bridge/HairPin 0.14
387 TestNetworkPlugins/group/calico/ControllerPod 6
388 TestNetworkPlugins/group/calico/KubeletFlags 0.38
389 TestNetworkPlugins/group/calico/NetCatPod 12.37
390 TestNetworkPlugins/group/calico/DNS 0.15
391 TestNetworkPlugins/group/calico/Localhost 0.14
392 TestNetworkPlugins/group/calico/HairPin 0.12
TestDownloadOnly/v1.28.0/json-events (5.45s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-428457 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-428457 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.450435702s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.45s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1027 18:56:43.338285  267880 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1027 18:56:43.338360  267880 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
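The preload check above amounts to a simple file-existence test against the download cache. A minimal way to reproduce it by hand, using the cache path from the log lines above (this assumes the same MINIKUBE_HOME layout as this run):

	# list the cached preload tarball; a missing file means minikube would re-download it
	ls -lh /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4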

TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-428457
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-428457: exit status 85 (80.455869ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-428457 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-428457 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 18:56:37
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 18:56:37.932737  267885 out.go:360] Setting OutFile to fd 1 ...
	I1027 18:56:37.932945  267885 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:56:37.932972  267885 out.go:374] Setting ErrFile to fd 2...
	I1027 18:56:37.932992  267885 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:56:37.933262  267885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	W1027 18:56:37.933435  267885 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21801-266035/.minikube/config/config.json: open /home/jenkins/minikube-integration/21801-266035/.minikube/config/config.json: no such file or directory
	I1027 18:56:37.933885  267885 out.go:368] Setting JSON to true
	I1027 18:56:37.934747  267885 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5950,"bootTime":1761585448,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1027 18:56:37.934840  267885 start.go:141] virtualization:  
	I1027 18:56:37.938866  267885 out.go:99] [download-only-428457] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1027 18:56:37.939087  267885 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball: no such file or directory
	I1027 18:56:37.939158  267885 notify.go:220] Checking for updates...
	I1027 18:56:37.941903  267885 out.go:171] MINIKUBE_LOCATION=21801
	I1027 18:56:37.944867  267885 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 18:56:37.947775  267885 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 18:56:37.950765  267885 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube
	I1027 18:56:37.953587  267885 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1027 18:56:37.959144  267885 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1027 18:56:37.959423  267885 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 18:56:37.984218  267885 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 18:56:37.984336  267885 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 18:56:38.045508  267885 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-27 18:56:38.035835338 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 18:56:38.045618  267885 docker.go:318] overlay module found
	I1027 18:56:38.048708  267885 out.go:99] Using the docker driver based on user configuration
	I1027 18:56:38.048757  267885 start.go:305] selected driver: docker
	I1027 18:56:38.048771  267885 start.go:925] validating driver "docker" against <nil>
	I1027 18:56:38.048889  267885 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 18:56:38.107470  267885 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-10-27 18:56:38.098219961 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 18:56:38.107639  267885 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1027 18:56:38.107947  267885 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1027 18:56:38.108114  267885 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1027 18:56:38.111364  267885 out.go:171] Using Docker driver with root privileges
	I1027 18:56:38.114205  267885 cni.go:84] Creating CNI manager for ""
	I1027 18:56:38.114279  267885 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 18:56:38.114292  267885 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 18:56:38.114378  267885 start.go:349] cluster config:
	{Name:download-only-428457 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-428457 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 18:56:38.117352  267885 out.go:99] Starting "download-only-428457" primary control-plane node in "download-only-428457" cluster
	I1027 18:56:38.117376  267885 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 18:56:38.120328  267885 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1027 18:56:38.120367  267885 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1027 18:56:38.120561  267885 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 18:56:38.136158  267885 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1027 18:56:38.136361  267885 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1027 18:56:38.136464  267885 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1027 18:56:38.175777  267885 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1027 18:56:38.175804  267885 cache.go:58] Caching tarball of preloaded images
	I1027 18:56:38.175970  267885 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1027 18:56:38.181978  267885 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1027 18:56:38.182007  267885 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1027 18:56:38.325714  267885 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1027 18:56:38.325843  267885 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1027 18:56:41.598784  267885 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1027 18:56:41.599230  267885 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/download-only-428457/config.json ...
	I1027 18:56:41.599267  267885 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/download-only-428457/config.json: {Name:mk485fad74f287f4d7127903bc49b6cdcb5eeac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:41.599434  267885 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1027 18:56:41.599598  267885 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21801-266035/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-428457 host does not exist
	  To start a cluster, run: "minikube start -p download-only-428457"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
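The non-zero exit is expected here: the download-only profile was never started, so the logs command has no running cluster to read and exits 85, and the PASS result shows the test only asserts on how long the call takes. A sketch of the same check by hand (this assumes the profile has not yet been deleted):

	# reproduce the logs call and capture its exit code
	out/minikube-linux-arm64 logs -p download-only-428457; echo "exit: $?"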

TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-428457
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (5.7s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-632012 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-632012 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.695670689s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (5.70s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1027 18:56:49.464629  267880 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1027 18:56:49.464670  267880 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-632012
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-632012: exit status 85 (100.949146ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-428457 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-428457 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ delete  │ -p download-only-428457                                                                                                                                                   │ download-only-428457 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ start   │ -o=json --download-only -p download-only-632012 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-632012 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 18:56:43
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 18:56:43.811074  268083 out.go:360] Setting OutFile to fd 1 ...
	I1027 18:56:43.811189  268083 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:56:43.811199  268083 out.go:374] Setting ErrFile to fd 2...
	I1027 18:56:43.811204  268083 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:56:43.811474  268083 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 18:56:43.811870  268083 out.go:368] Setting JSON to true
	I1027 18:56:43.812689  268083 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":5956,"bootTime":1761585448,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1027 18:56:43.812747  268083 start.go:141] virtualization:  
	I1027 18:56:43.816087  268083 out.go:99] [download-only-632012] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 18:56:43.816283  268083 notify.go:220] Checking for updates...
	I1027 18:56:43.819226  268083 out.go:171] MINIKUBE_LOCATION=21801
	I1027 18:56:43.822185  268083 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 18:56:43.825055  268083 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 18:56:43.828000  268083 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube
	I1027 18:56:43.830785  268083 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1027 18:56:43.836405  268083 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1027 18:56:43.836696  268083 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 18:56:43.856574  268083 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 18:56:43.856685  268083 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 18:56:43.928897  268083 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-27 18:56:43.919540685 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 18:56:43.929006  268083 docker.go:318] overlay module found
	I1027 18:56:43.932005  268083 out.go:99] Using the docker driver based on user configuration
	I1027 18:56:43.932040  268083 start.go:305] selected driver: docker
	I1027 18:56:43.932052  268083 start.go:925] validating driver "docker" against <nil>
	I1027 18:56:43.932164  268083 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 18:56:43.983398  268083 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-27 18:56:43.974907845 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 18:56:43.983609  268083 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1027 18:56:43.983898  268083 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1027 18:56:43.984045  268083 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1027 18:56:43.987205  268083 out.go:171] Using Docker driver with root privileges
	I1027 18:56:43.989975  268083 cni.go:84] Creating CNI manager for ""
	I1027 18:56:43.990044  268083 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 18:56:43.990059  268083 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 18:56:43.990136  268083 start.go:349] cluster config:
	{Name:download-only-632012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-632012 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 18:56:43.992964  268083 out.go:99] Starting "download-only-632012" primary control-plane node in "download-only-632012" cluster
	I1027 18:56:43.992987  268083 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 18:56:43.995833  268083 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1027 18:56:43.995862  268083 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 18:56:43.995971  268083 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 18:56:44.011724  268083 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1027 18:56:44.011865  268083 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1027 18:56:44.011887  268083 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1027 18:56:44.011895  268083 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1027 18:56:44.011902  268083 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1027 18:56:44.054306  268083 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 18:56:44.054349  268083 cache.go:58] Caching tarball of preloaded images
	I1027 18:56:44.054578  268083 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 18:56:44.058009  268083 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1027 18:56:44.058127  268083 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1027 18:56:44.147618  268083 preload.go:290] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1027 18:56:44.147670  268083 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21801-266035/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1027 18:56:48.723594  268083 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 18:56:48.724003  268083 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/download-only-632012/config.json ...
	I1027 18:56:48.724059  268083 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/download-only-632012/config.json: {Name:mke12b2d283ad46da4f97ce13e769d17888a4535 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:48.724274  268083 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 18:56:48.724459  268083 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21801-266035/.minikube/cache/linux/arm64/v1.34.1/kubectl
	
	
	* The control-plane node download-only-632012 host does not exist
	  To start a cluster, run: "minikube start -p download-only-632012"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.10s)

TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-632012
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.59s)

=== RUN   TestBinaryMirror
I1027 18:56:50.630767  267880 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-324835 --alsologtostderr --binary-mirror http://127.0.0.1:42369 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-324835" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-324835
--- PASS: TestBinaryMirror (0.59s)
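TestBinaryMirror points minikube at a local HTTP endpoint via --binary-mirror and only exercises the download path, not a full cluster start. A rough way to try the same flag outside the suite, assuming a directory laid out like dl.k8s.io (the ./k8s-mirror directory and the demo profile name below are hypothetical; the port is the one this run happened to use):

	# serve a local mirror, then download against it instead of dl.k8s.io
	python3 -m http.server 42369 --directory ./k8s-mirror &
	out/minikube-linux-arm64 start --download-only -p binary-mirror-demo --binary-mirror http://127.0.0.1:42369 --driver=docker --container-runtime=crio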

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-101592
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-101592: exit status 85 (80.148621ms)

-- stdout --
	* Profile "addons-101592" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-101592"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-101592
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-101592: exit status 85 (67.08615ms)

-- stdout --
	* Profile "addons-101592" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-101592"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (172.89s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-101592 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-101592 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m52.889829051s)
--- PASS: TestAddons/Setup (172.89s)
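With this many addons enabled in a single start, a quick way to confirm which ones actually came up, assuming the addons-101592 profile is still running:

	# print the enabled/disabled state of every addon for this profile
	out/minikube-linux-arm64 -p addons-101592 addons list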

TestAddons/serial/GCPAuth/Namespaces (0.21s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-101592 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-101592 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.21s)

TestAddons/serial/GCPAuth/FakeCredentials (10.86s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-101592 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-101592 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e1722866-51ec-40b7-b940-c96c0602e88b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e1722866-51ec-40b7-b940-c96c0602e88b] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.00384918s
addons_test.go:694: (dbg) Run:  kubectl --context addons-101592 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-101592 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-101592 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-101592 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.86s)

TestAddons/StoppedEnableDisable (12.44s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-101592
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-101592: (12.160830071s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-101592
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-101592
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-101592
--- PASS: TestAddons/StoppedEnableDisable (12.44s)

TestCertOptions (36.02s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-319273 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1027 19:56:46.044585  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/functional-647336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-319273 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (33.156329004s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-319273 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-319273 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-319273 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-319273" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-319273
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-319273: (2.104834331s)
--- PASS: TestCertOptions (36.02s)
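The ssh step above dumps the whole apiserver certificate. To inspect just the names and IPs injected by --apiserver-ips/--apiserver-names, a narrower variant of the same command (this assumes a reasonably recent openssl in the node image, one that supports -ext):

	# show only the subjectAltName extension, where the extra IPs and hostnames land
	out/minikube-linux-arm64 -p cert-options-319273 ssh "openssl x509 -noout -ext subjectAltName -in /var/lib/minikube/certs/apiserver.crt"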

TestCertExpiration (236.45s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-280013 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-280013 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (33.702242894s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-280013 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-280013 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (19.402779209s)
helpers_test.go:175: Cleaning up "cert-expiration-280013" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-280013
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-280013: (3.349434485s)
--- PASS: TestCertExpiration (236.45s)
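The two starts differ only in --cert-expiration (3m, then 8760h); the second start is expected to renew the by-then-expired certificates, which the overall test duration reflects. To read the resulting expiry off the node, a sketch against the same profile (assumes it is still up):

	# print the notAfter date of the regenerated apiserver certificate
	out/minikube-linux-arm64 -p cert-expiration-280013 ssh "sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"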

TestForceSystemdFlag (44.24s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-769818 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-769818 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.816739444s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-769818 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-769818" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-769818
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-769818: (2.89482001s)
--- PASS: TestForceSystemdFlag (44.24s)
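The cat of 02-crio.conf above is how the test verifies that --force-systemd took effect. To check the relevant key directly rather than reading the whole file, a sketch (this assumes the setting of interest is CRI-O's usual cgroup_manager key):

	# a systemd-managed runtime should show cgroup_manager = "systemd"
	out/minikube-linux-arm64 -p force-systemd-flag-769818 ssh "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"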

TestForceSystemdEnv (39.03s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-105360 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1027 19:54:28.165327  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:54:45.068055  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:54:49.111131  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/functional-647336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-105360 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.496898331s)
helpers_test.go:175: Cleaning up "force-systemd-env-105360" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-105360
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-105360: (2.528011951s)
--- PASS: TestForceSystemdEnv (39.03s)

TestErrorSpam/setup (36.51s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-309835 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-309835 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-309835 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-309835 --driver=docker  --container-runtime=crio: (36.505583098s)
--- PASS: TestErrorSpam/setup (36.51s)

TestErrorSpam/start (0.82s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-309835 --log_dir /tmp/nospam-309835 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-309835 --log_dir /tmp/nospam-309835 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-309835 --log_dir /tmp/nospam-309835 start --dry-run
--- PASS: TestErrorSpam/start (0.82s)

TestErrorSpam/status (1.25s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-309835 --log_dir /tmp/nospam-309835 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-309835 --log_dir /tmp/nospam-309835 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-309835 --log_dir /tmp/nospam-309835 status
--- PASS: TestErrorSpam/status (1.25s)

TestErrorSpam/pause (6.02s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-309835 --log_dir /tmp/nospam-309835 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-309835 --log_dir /tmp/nospam-309835 pause: exit status 80 (2.458105126s)

-- stdout --
	* Pausing node nospam-309835 ... 
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:03:45Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-309835 --log_dir /tmp/nospam-309835 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-309835 --log_dir /tmp/nospam-309835 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-309835 --log_dir /tmp/nospam-309835 pause: exit status 80 (1.863768193s)
-- stdout --
	* Pausing node nospam-309835 ... 
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:03:47Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-309835 --log_dir /tmp/nospam-309835 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-309835 --log_dir /tmp/nospam-309835 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-309835 --log_dir /tmp/nospam-309835 pause: exit status 80 (1.699150321s)
-- stdout --
	* Pausing node nospam-309835 ... 
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:03:48Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-309835 --log_dir /tmp/nospam-309835 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.02s)
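
Note: all three pause attempts fail identically: minikube's guest probe shells into the node and runs `sudo runc list -f json`, which exits non-zero because runc's state directory /run/runc is missing. A minimal sketch of that probe, assuming the docker driver and the profile name from this log (the helper name is ours, not minikube's):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listRuncContainers reproduces the probe behind the GUEST_PAUSE errors above:
// run `sudo runc list -f json` inside the node via `minikube ssh`.
func listRuncContainers(profile string) (string, error) {
	out, err := exec.Command("minikube", "-p", profile, "ssh",
		"sudo runc list -f json").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := listRuncContainers("nospam-309835") // profile from this log
	if err != nil && strings.Contains(out, "/run/runc") {
		// the failure mode logged above; minikube maps it to exit status 80
		fmt.Println("runc state directory /run/runc is missing on the node")
		return
	}
	fmt.Print(out)
}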

TestErrorSpam/unpause (5.69s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-309835 --log_dir /tmp/nospam-309835 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-309835 --log_dir /tmp/nospam-309835 unpause: exit status 80 (2.001001552s)
-- stdout --
	* Unpausing node nospam-309835 ... 
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:03:50Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-309835 --log_dir /tmp/nospam-309835 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-309835 --log_dir /tmp/nospam-309835 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-309835 --log_dir /tmp/nospam-309835 unpause: exit status 80 (1.876204615s)
-- stdout --
	* Unpausing node nospam-309835 ... 
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:03:52Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-309835 --log_dir /tmp/nospam-309835 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-309835 --log_dir /tmp/nospam-309835 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-309835 --log_dir /tmp/nospam-309835 unpause: exit status 80 (1.812589317s)
-- stdout --
	* Unpausing node nospam-309835 ... 
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:03:54Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-309835 --log_dir /tmp/nospam-309835 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.69s)

TestErrorSpam/stop (1.52s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-309835 --log_dir /tmp/nospam-309835 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-309835 --log_dir /tmp/nospam-309835 stop: (1.307857495s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-309835 --log_dir /tmp/nospam-309835 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-309835 --log_dir /tmp/nospam-309835 stop
--- PASS: TestErrorSpam/stop (1.52s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21801-266035/.minikube/files/etc/test/nested/copy/267880/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (78.75s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-647336 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1027 19:04:45.065712  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:04:45.076358  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:04:45.091675  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:04:45.113638  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:04:45.173969  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:04:45.257680  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:04:45.420251  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:04:45.741949  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:04:46.383725  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:04:47.665713  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:04:50.228404  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:04:55.349928  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:05:05.591933  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-647336 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m18.750094711s)
--- PASS: TestFunctional/serial/StartWithProxy (78.75s)
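
Note: the repeated "Loading client cert failed" lines are not part of this test; they come from a kubeconfig entry that still points at the client.crt of the already-deleted addons-101592 profile. A small sketch, assuming k8s.io/client-go is available, that surfaces such stale references by stat-ing the certificate a kubeconfig names:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// e.g. /home/jenkins/minikube-integration/21801-266035/kubeconfig in this run
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if cert := cfg.TLSClientConfig.CertFile; cert != "" {
		if _, err := os.Stat(cert); err != nil {
			// the exact condition cert_rotation.go keeps logging above
			fmt.Printf("stale client cert reference: %v\n", err)
		}
	}
}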

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.29s)
=== RUN   TestFunctional/serial/SoftStart
I1027 19:05:19.638529  267880 config.go:182] Loaded profile config "functional-647336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-647336 --alsologtostderr -v=8
E1027 19:05:26.073439  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-647336 --alsologtostderr -v=8: (29.290191683s)
functional_test.go:678: soft start took 29.292120451s for "functional-647336" cluster.
I1027 19:05:48.929001  267880 config.go:182] Loaded profile config "functional-647336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (29.29s)

TestFunctional/serial/KubeContext (0.07s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-647336 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-647336 cache add registry.k8s.io/pause:3.1: (1.188663869s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-647336 cache add registry.k8s.io/pause:3.3: (1.192670125s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-647336 cache add registry.k8s.io/pause:latest: (1.094372419s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.48s)

TestFunctional/serial/CacheCmd/cache/add_local (1.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-647336 /tmp/TestFunctionalserialCacheCmdcacheadd_local93832169/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 cache add minikube-local-cache-test:functional-647336
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 cache delete minikube-local-cache-test:functional-647336
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-647336
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.06s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-647336 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (287.380084ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)
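
Note: cache_reload is a round trip: remove the cached image inside the node, confirm `crictl inspecti` now fails, run `cache reload`, and confirm the image is back. A sketch of the same loop via os/exec, using the binary path and profile name from this log:

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary under test against the functional profile.
func run(args ...string) error {
	out, err := exec.Command("out/minikube-linux-arm64",
		append([]string{"-p", "functional-647336"}, args...)...).CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	_ = run("ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
	// expected to fail while the image is absent (the exit status 1 above)
	if run("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest") == nil {
		fmt.Println("unexpected: image still present after rmi")
	}
	_ = run("cache", "reload")
	if err := run("ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		fmt.Println("cache reload did not restore the image:", err)
	}
}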

TestFunctional/serial/CacheCmd/cache/delete (0.11s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 kubectl -- --context functional-647336 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-647336 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (36.81s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-647336 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1027 19:06:07.036295  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-647336 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.797722042s)
functional_test.go:776: restart took 36.797838385s for "functional-647336" cluster.
I1027 19:06:33.072694  267880 config.go:182] Loaded profile config "functional-647336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (36.81s)

TestFunctional/serial/ComponentHealth (0.09s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-647336 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)

TestFunctional/serial/LogsCmd (1.46s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-647336 logs: (1.460731427s)
--- PASS: TestFunctional/serial/LogsCmd (1.46s)

TestFunctional/serial/LogsFileCmd (1.52s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 logs --file /tmp/TestFunctionalserialLogsFileCmd1045522503/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-647336 logs --file /tmp/TestFunctionalserialLogsFileCmd1045522503/001/logs.txt: (1.519831443s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.52s)

TestFunctional/serial/InvalidService (4.74s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-647336 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-647336
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-647336: exit status 115 (372.315407ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31688 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-647336 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-647336 delete -f testdata/invalidsvc.yaml: (1.123982774s)
--- PASS: TestFunctional/serial/InvalidService (4.74s)
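
Note: exit status 115 is minikube's SVC_UNREACHABLE path: the NodePort URL can be computed, but no running pod backs the service. Roughly the same check, sketched against the context from this log (the jsonpath query is ours):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// list the endpoint IPs behind the service; empty output means no running pod
	out, err := exec.Command("kubectl", "--context", "functional-647336",
		"get", "endpoints", "invalid-svc",
		"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
	if err != nil || strings.TrimSpace(string(out)) == "" {
		fmt.Println("service not available: no running pod for service invalid-svc found")
	}
}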

TestFunctional/parallel/ConfigCmd (0.54s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-647336 config get cpus: exit status 14 (77.808151ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-647336 config get cpus: exit status 14 (96.111092ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.54s)

TestFunctional/parallel/DashboardCmd (7.55s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-647336 --alsologtostderr -v=1]
2025/10/27 19:17:13 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-647336 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 295451: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.55s)
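
Note: the DEBUG line is the test polling the dashboard through the kubectl proxy URL until it answers. An equivalent probe, sketched with the port and path from this log:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/"
	// retry briefly; the dashboard pod takes a few seconds to come up
	for i := 0; i < 30; i++ {
		if resp, err := http.Get(url); err == nil {
			resp.Body.Close()
			fmt.Println("dashboard responded:", resp.Status)
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("dashboard never responded")
}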

TestFunctional/parallel/DryRun (0.5s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-647336 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-647336 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (233.706076ms)
-- stdout --
	* [functional-647336] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21801
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I1027 19:17:05.481277  295157 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:17:05.481481  295157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:17:05.481513  295157 out.go:374] Setting ErrFile to fd 2...
	I1027 19:17:05.481537  295157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:17:05.481856  295157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 19:17:05.482316  295157 out.go:368] Setting JSON to false
	I1027 19:17:05.483324  295157 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":7178,"bootTime":1761585448,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1027 19:17:05.483437  295157 start.go:141] virtualization:  
	I1027 19:17:05.487320  295157 out.go:179] * [functional-647336] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 19:17:05.490462  295157 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:17:05.490551  295157 notify.go:220] Checking for updates...
	I1027 19:17:05.496492  295157 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:17:05.499563  295157 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 19:17:05.508971  295157 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube
	I1027 19:17:05.512098  295157 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 19:17:05.514927  295157 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:17:05.518368  295157 config.go:182] Loaded profile config "functional-647336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:17:05.519065  295157 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:17:05.554356  295157 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 19:17:05.554485  295157 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:17:05.636895  295157 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 19:17:05.626743256 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 19:17:05.637008  295157 docker.go:318] overlay module found
	I1027 19:17:05.640118  295157 out.go:179] * Using the docker driver based on existing profile
	I1027 19:17:05.642976  295157 start.go:305] selected driver: docker
	I1027 19:17:05.643116  295157 start.go:925] validating driver "docker" against &{Name:functional-647336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-647336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:17:05.643231  295157 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:17:05.646902  295157 out.go:203] 
	W1027 19:17:05.649951  295157 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1027 19:17:05.652957  295157 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-647336 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.50s)
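
Note: the RSRC_INSUFFICIENT_REQ_MEMORY exit is a pure pre-flight check: the requested 250MB is compared against an 1800MB floor before any node work starts. A minimal sketch of that validation (the constant comes from the error text; the function name is ours):

package main

import "fmt"

// minUsableMemoryMB is the floor quoted in the error message above.
const minUsableMemoryMB = 1800

// validateRequestedMemory mirrors the dry-run rejection seen in this test.
func validateRequestedMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	fmt.Println(validateRequestedMemory(250)) // fails, matching the exit status 23 above
}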

TestFunctional/parallel/InternationalLanguage (0.23s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-647336 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-647336 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (225.589106ms)
-- stdout --
	* [functional-647336] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21801
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I1027 19:17:05.990636  295276 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:17:05.990881  295276 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:17:05.990911  295276 out.go:374] Setting ErrFile to fd 2...
	I1027 19:17:05.990930  295276 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:17:05.991969  295276 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 19:17:05.992528  295276 out.go:368] Setting JSON to false
	I1027 19:17:05.993597  295276 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":7178,"bootTime":1761585448,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1027 19:17:05.993704  295276 start.go:141] virtualization:  
	I1027 19:17:05.997008  295276 out.go:179] * [functional-647336] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1027 19:17:06.000937  295276 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:17:06.001118  295276 notify.go:220] Checking for updates...
	I1027 19:17:06.008468  295276 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:17:06.011457  295276 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 19:17:06.014482  295276 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube
	I1027 19:17:06.018386  295276 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 19:17:06.022470  295276 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:17:06.026193  295276 config.go:182] Loaded profile config "functional-647336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:17:06.026886  295276 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:17:06.063346  295276 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 19:17:06.063525  295276 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:17:06.131252  295276 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-27 19:17:06.120902628 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 19:17:06.131376  295276 docker.go:318] overlay module found
	I1027 19:17:06.134543  295276 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1027 19:17:06.137481  295276 start.go:305] selected driver: docker
	I1027 19:17:06.137506  295276 start.go:925] validating driver "docker" against &{Name:functional-647336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-647336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:17:06.137613  295276 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:17:06.141330  295276 out.go:203] 
	W1027 19:17:06.144369  295276 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1027 19:17:06.147094  295276 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)
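
Note: the French output is the same dry run as before, localized from the process locale. A sketch of forcing that, assuming minikube selects its message catalog from the usual LC_ALL/LANG environment variables:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-647336",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=crio")
	// assumption: a French locale in the environment switches the catalog
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
	out, _ := cmd.CombinedOutput()
	fmt.Printf("%s", out)
}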

TestFunctional/parallel/StatusCmd (1.03s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)

TestFunctional/parallel/AddonsCmd (0.16s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

TestFunctional/parallel/PersistentVolumeClaim (26.67s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [f234d77a-3a35-4819-9f92-af0ab74e6165] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003162763s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-647336 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-647336 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-647336 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-647336 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [2a68909d-8df3-417c-9421-4fd864ee8217] Pending
helpers_test.go:352: "sp-pod" [2a68909d-8df3-417c-9421-4fd864ee8217] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [2a68909d-8df3-417c-9421-4fd864ee8217] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003057962s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-647336 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-647336 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-647336 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [559a8478-f8b0-4b83-bf25-6ffc76f88775] Pending
helpers_test.go:352: "sp-pod" [559a8478-f8b0-4b83-bf25-6ffc76f88775] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [559a8478-f8b0-4b83-bf25-6ffc76f88775] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003630187s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-647336 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.67s)
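
Each "waiting 4m0s for pods matching ..." step above reduces to polling pods by label selector until one reports Running. A hedged client-go sketch of that loop (a hypothetical helper, not the harness's own code; the namespace and selector are the ones from this test):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunning polls pods matching selector in ns until one is Running
// or the context (capped at 4m0s below, like the test) expires.
func waitForRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	fmt.Println(waitForRunning(ctx, cs, "default", "test=storage-provisioner"))
}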

TestFunctional/parallel/SSHCmd (0.54s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.54s)

TestFunctional/parallel/CpCmd (2.13s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh -n functional-647336 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 cp functional-647336:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3480581060/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh -n functional-647336 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh -n functional-647336 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.13s)

TestFunctional/parallel/FileSync (0.34s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/267880/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh "sudo cat /etc/test/nested/copy/267880/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

TestFunctional/parallel/CertSync (2.31s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/267880.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh "sudo cat /etc/ssl/certs/267880.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/267880.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh "sudo cat /usr/share/ca-certificates/267880.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2678802.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh "sudo cat /etc/ssl/certs/2678802.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2678802.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh "sudo cat /usr/share/ca-certificates/2678802.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.31s)

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-647336 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)
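
The go-template passed to kubectl above ranges over the first node's label map and prints each key. The same range construct in a self-contained sketch (stand-in label data; kubectl supplies the real node object):

package main

import (
	"os"
	"text/template"
)

func main() {
	// Stand-in for (index .items 0).metadata.labels.
	labels := map[string]string{
		"kubernetes.io/arch": "arm64",
		"kubernetes.io/os":   "linux",
	}
	const tmpl = "{{range $k, $v := .}}{{$k}} {{end}}"
	t := template.Must(template.New("labels").Parse(tmpl))
	// text/template visits map keys in sorted order, so this prints:
	// kubernetes.io/arch kubernetes.io/os
	_ = t.Execute(os.Stdout, labels)
}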

TestFunctional/parallel/NonActiveRuntimeDisabled (0.8s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-647336 ssh "sudo systemctl is-active docker": exit status 1 (420.761611ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-647336 ssh "sudo systemctl is-active containerd": exit status 1 (382.384796ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.80s)
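
The probe above leans on systemctl semantics: "systemctl is-active UNIT" prints the unit state and exits non-zero when the unit is not active (the remote status 3 is what minikube ssh surfaces as exit status 1). A minimal sketch of the same check, assuming it runs inside the node (e.g. via minikube ssh) where only crio should report active:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// isActive mirrors the test's probe: a nil error means exit 0 ("active");
// any non-zero exit (3 for "inactive") comes back as a non-nil error.
func isActive(unit string) (string, bool) {
	out, err := exec.Command("systemctl", "is-active", unit).CombinedOutput()
	return strings.TrimSpace(string(out)), err == nil
}

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		state, active := isActive(unit)
		fmt.Printf("%s: %s (active=%v)\n", unit, state, active)
	}
}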

TestFunctional/parallel/License (0.42s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.42s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (1.16s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-647336 version -o=json --components: (1.159788029s)
--- PASS: TestFunctional/parallel/Version/components (1.16s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-647336 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-647336 image ls --format short --alsologtostderr:
I1027 19:17:15.402914  295819 out.go:360] Setting OutFile to fd 1 ...
I1027 19:17:15.403116  295819 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:17:15.403141  295819 out.go:374] Setting ErrFile to fd 2...
I1027 19:17:15.403165  295819 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:17:15.403520  295819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
I1027 19:17:15.404493  295819 config.go:182] Loaded profile config "functional-647336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 19:17:15.404716  295819 config.go:182] Loaded profile config "functional-647336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 19:17:15.405434  295819 cli_runner.go:164] Run: docker container inspect functional-647336 --format={{.State.Status}}
I1027 19:17:15.422902  295819 ssh_runner.go:195] Run: systemctl --version
I1027 19:17:15.422960  295819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-647336
I1027 19:17:15.440267  295819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/functional-647336/id_rsa Username:docker}
I1027 19:17:15.545560  295819 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-647336 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ docker.io/library/nginx                 │ alpine             │ 9c92f55c0336c │ 54.7MB │
│ docker.io/library/nginx                 │ latest             │ e612b97116b41 │ 176MB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ localhost/my-image                      │ functional-647336  │ 44014ac47408f │ 1.64MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-647336 image ls --format table --alsologtostderr:
I1027 19:17:19.997467  296289 out.go:360] Setting OutFile to fd 1 ...
I1027 19:17:19.999159  296289 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:17:19.999176  296289 out.go:374] Setting ErrFile to fd 2...
I1027 19:17:19.999190  296289 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:17:20.000112  296289 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
I1027 19:17:20.000829  296289 config.go:182] Loaded profile config "functional-647336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 19:17:20.001005  296289 config.go:182] Loaded profile config "functional-647336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 19:17:20.001610  296289 cli_runner.go:164] Run: docker container inspect functional-647336 --format={{.State.Status}}
I1027 19:17:20.024066  296289 ssh_runner.go:195] Run: systemctl --version
I1027 19:17:20.024132  296289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-647336
I1027 19:17:20.044142  296289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/functional-647336/id_rsa Username:docker}
I1027 19:17:20.149673  296289 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-647336 image ls --format json --alsologtostderr:
[{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"e612b97116b41d24816faa9fd204e1177027648a2cb14bb627dd1eaab1494e8f","repoDigests":["docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903","docker.io/library/nginx@sha256:68e62e210589c349f01d82308b45fbd6fb9b855f8b12cb27e11ad48dbfd0e43f"],"repoTags":["docker.io/library/nginx:latest"],"size":"176071022"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa49
6b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"4acc596ed7dd1eb0a2448aeb4bc475d2114c61d0a79e588f7a179489ccf4a61e","repoDigests":["docker.io/library/2d34c79ec318154f5eb5ec03145ca35e0e2c6660cb1bae4bba63131b3f6e4392-tmp@sha256:f9f1d1878b22dcd5fc95d799eb74790bcd9b2ddde8807c0b2f5815e79949c588"],"repoTags":[],"size":"1638178"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"44014ac47408f76a816533207c1dfc39ae4a8a95f59979afbd007be457be50a7","repoDigests":["localhost/my-image@sha256:9f40ee8b1fb8edda5ce7aae4c171b58b3815e46eb39f8e9c0ec99281aed4c0a7"],"repo
Tags":["localhost/my-image:functional-647336"],"size":"1640790"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"}
,{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa","repoDigests":["docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0","docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54704654"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTag
s":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","regis
try.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99
250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-647336 image ls --format json --alsologtostderr:
I1027 19:17:19.764091  296253 out.go:360] Setting OutFile to fd 1 ...
I1027 19:17:19.764218  296253 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:17:19.764230  296253 out.go:374] Setting ErrFile to fd 2...
I1027 19:17:19.764236  296253 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:17:19.764490  296253 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
I1027 19:17:19.765086  296253 config.go:182] Loaded profile config "functional-647336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 19:17:19.765213  296253 config.go:182] Loaded profile config "functional-647336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 19:17:19.765653  296253 cli_runner.go:164] Run: docker container inspect functional-647336 --format={{.State.Status}}
I1027 19:17:19.782596  296253 ssh_runner.go:195] Run: systemctl --version
I1027 19:17:19.782659  296253 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-647336
I1027 19:17:19.801061  296253 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/functional-647336/id_rsa Username:docker}
I1027 19:17:19.905489  296253 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
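
The JSON above is a flat array of image records. A minimal sketch that decodes it from stdin, with struct fields mirroring the keys visible in the log (note size is a string of bytes, not a number); pipe "image ls --format json" into it:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Image matches the records emitted above; untagged images (e.g. the
// dashboard layers) have an empty repoTags slice.
type Image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	var imgs []Image
	if err := json.NewDecoder(os.Stdin).Decode(&imgs); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, im := range imgs {
		fmt.Printf("%-13.13s %v\n", im.ID, im.RepoTags)
	}
}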

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-647336 image ls --format yaml --alsologtostderr:
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa
repoDigests:
- docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
repoTags:
- docker.io/library/nginx:alpine
size: "54704654"
- id: e612b97116b41d24816faa9fd204e1177027648a2cb14bb627dd1eaab1494e8f
repoDigests:
- docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903
- docker.io/library/nginx@sha256:68e62e210589c349f01d82308b45fbd6fb9b855f8b12cb27e11ad48dbfd0e43f
repoTags:
- docker.io/library/nginx:latest
size: "176071022"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-647336 image ls --format yaml --alsologtostderr:
I1027 19:17:15.634112  295856 out.go:360] Setting OutFile to fd 1 ...
I1027 19:17:15.634299  295856 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:17:15.634331  295856 out.go:374] Setting ErrFile to fd 2...
I1027 19:17:15.634356  295856 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:17:15.634612  295856 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
I1027 19:17:15.635320  295856 config.go:182] Loaded profile config "functional-647336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 19:17:15.635518  295856 config.go:182] Loaded profile config "functional-647336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 19:17:15.635994  295856 cli_runner.go:164] Run: docker container inspect functional-647336 --format={{.State.Status}}
I1027 19:17:15.653710  295856 ssh_runner.go:195] Run: systemctl --version
I1027 19:17:15.653833  295856 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-647336
I1027 19:17:15.671611  295856 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/functional-647336/id_rsa Username:docker}
I1027 19:17:15.773351  295856 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-647336 ssh pgrep buildkitd: exit status 1 (273.575833ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 image build -t localhost/my-image:functional-647336 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-647336 image build -t localhost/my-image:functional-647336 testdata/build --alsologtostderr: (3.38385829s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-647336 image build -t localhost/my-image:functional-647336 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 4acc596ed7d
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-647336
--> 44014ac4740
Successfully tagged localhost/my-image:functional-647336
44014ac47408f76a816533207c1dfc39ae4a8a95f59979afbd007be457be50a7
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-647336 image build -t localhost/my-image:functional-647336 testdata/build --alsologtostderr:
I1027 19:17:16.137220  295955 out.go:360] Setting OutFile to fd 1 ...
I1027 19:17:16.138005  295955 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:17:16.138020  295955 out.go:374] Setting ErrFile to fd 2...
I1027 19:17:16.138026  295955 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:17:16.138309  295955 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
I1027 19:17:16.138963  295955 config.go:182] Loaded profile config "functional-647336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 19:17:16.140231  295955 config.go:182] Loaded profile config "functional-647336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 19:17:16.140767  295955 cli_runner.go:164] Run: docker container inspect functional-647336 --format={{.State.Status}}
I1027 19:17:16.159344  295955 ssh_runner.go:195] Run: systemctl --version
I1027 19:17:16.159410  295955 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-647336
I1027 19:17:16.178114  295955 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/functional-647336/id_rsa Username:docker}
I1027 19:17:16.281341  295955 build_images.go:161] Building image from path: /tmp/build.3088529587.tar
I1027 19:17:16.281410  295955 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1027 19:17:16.291235  295955 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3088529587.tar
I1027 19:17:16.295781  295955 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3088529587.tar: stat -c "%s %y" /var/lib/minikube/build/build.3088529587.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3088529587.tar': No such file or directory
I1027 19:17:16.295811  295955 ssh_runner.go:362] scp /tmp/build.3088529587.tar --> /var/lib/minikube/build/build.3088529587.tar (3072 bytes)
I1027 19:17:16.317814  295955 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3088529587
I1027 19:17:16.325624  295955 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3088529587 -xf /var/lib/minikube/build/build.3088529587.tar
I1027 19:17:16.333650  295955 crio.go:315] Building image: /var/lib/minikube/build/build.3088529587
I1027 19:17:16.333715  295955 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-647336 /var/lib/minikube/build/build.3088529587 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1027 19:17:19.442698  295955 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-647336 /var/lib/minikube/build/build.3088529587 --cgroup-manager=cgroupfs: (3.108956283s)
I1027 19:17:19.442764  295955 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3088529587
I1027 19:17:19.451493  295955 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3088529587.tar
I1027 19:17:19.459426  295955 build_images.go:217] Built localhost/my-image:functional-647336 from /tmp/build.3088529587.tar
I1027 19:17:19.459456  295955 build_images.go:133] succeeded building to: functional-647336
I1027 19:17:19.459461  295955 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.90s)
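
The build path in the stderr log is: pack testdata/build into a tar, scp it under /var/lib/minikube/build, unpack, then drive podman inside the node. A hedged os/exec sketch of that final step only (the build directory name here is illustrative; the log shows a generated one, build.3088529587):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Hypothetical unpacked build context; the real name is per-build.
	dir := "/var/lib/minikube/build/build.example"
	out, err := exec.Command("sudo", "podman", "build",
		"-t", "localhost/my-image:functional-647336",
		dir, "--cgroup-manager=cgroupfs").CombinedOutput()
	fmt.Println(string(out), err)
}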

TestFunctional/parallel/ImageCommands/Setup (0.66s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-647336
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.66s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 image rm kicbase/echo-server:functional-647336 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.51s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-647336 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-647336 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-647336 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-647336 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 291494: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-647336 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.32s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-647336 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [4700b728-876d-44bc-941a-fa920b7513f5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [4700b728-876d-44bc-941a-fa920b7513f5] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.00291034s
I1027 19:06:59.762637  267880 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.32s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-647336 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.165.153 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-647336 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/List (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 service list -o json
functional_test.go:1504: Took "511.000658ms" to run "out/minikube-linux-arm64 -p functional-647336 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "362.29659ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "51.936546ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "363.645408ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "57.22302ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

TestFunctional/parallel/MountCmd/any-port (7.42s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-647336 /tmp/TestFunctionalparallelMountCmdany-port529485509/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761592613764196440" to /tmp/TestFunctionalparallelMountCmdany-port529485509/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761592613764196440" to /tmp/TestFunctionalparallelMountCmdany-port529485509/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761592613764196440" to /tmp/TestFunctionalparallelMountCmdany-port529485509/001/test-1761592613764196440
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-647336 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (374.093207ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1027 19:16:54.138544  267880 retry.go:31] will retry after 513.871859ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 27 19:16 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 27 19:16 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 27 19:16 test-1761592613764196440
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh cat /mount-9p/test-1761592613764196440
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-647336 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [8d2bbd64-5624-4863-92c7-ff6da8442ce9] Pending
helpers_test.go:352: "busybox-mount" [8d2bbd64-5624-4863-92c7-ff6da8442ce9] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [8d2bbd64-5624-4863-92c7-ff6da8442ce9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [8d2bbd64-5624-4863-92c7-ff6da8442ce9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.002801437s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-647336 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-647336 /tmp/TestFunctionalparallelMountCmdany-port529485509/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.42s)
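For reference, the flow above can be reproduced by hand; the sketch below reuses the profile from this run and substitutes mktemp for the generated temp directory. The retry loop mirrors the retry.go backoff in the log, since the first findmnt can race the mount daemon.

# Manual re-run of the 9p mount check (sketch; assumes the cluster from
# this run is still up).
SRC=$(mktemp -d)
echo "test" > "$SRC/created-by-test"
out/minikube-linux-arm64 mount -p functional-647336 "$SRC:/mount-9p" &
MOUNT_PID=$!
for i in 1 2 3 4 5; do
  out/minikube-linux-arm64 -p functional-647336 ssh "findmnt -T /mount-9p | grep 9p" && break
  sleep 1   # the harness uses a randomized backoff (retry.go:31) instead
done
out/minikube-linux-arm64 -p functional-647336 ssh "ls -la /mount-9p"
kill "$MOUNT_PID"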

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.22s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-647336 /tmp/TestFunctionalparallelMountCmdspecific-port807785502/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-647336 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (432.316976ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1027 19:17:01.615349  267880 retry.go:31] will retry after 637.970488ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-647336 /tmp/TestFunctionalparallelMountCmdspecific-port807785502/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-647336 ssh "sudo umount -f /mount-9p": exit status 1 (318.530098ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-647336 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-647336 /tmp/TestFunctionalparallelMountCmdspecific-port807785502/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.22s)
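The forced unmount at the end exits 32 only because the mount daemon had already detached /mount-9p; the stdout above ("not mounted") confirms it. A guard like the sketch below, under the same assumptions about profile and mount point, keeps that path quiet:

# Only attempt the forced unmount when something is mounted; otherwise
# umount reports "not mounted" and exits 32, which ssh propagates.
out/minikube-linux-arm64 -p functional-647336 ssh \
  'findmnt -T /mount-9p >/dev/null 2>&1 && sudo umount -f /mount-9p || echo "nothing mounted at /mount-9p"'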

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.03s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-647336 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1549481524/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-647336 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1549481524/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-647336 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1549481524/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-647336 ssh "findmnt -T" /mount1: exit status 1 (603.901303ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1027 19:17:04.006937  267880 retry.go:31] will retry after 439.895216ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-647336 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-647336 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-647336 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1549481524/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-647336 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1549481524/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-647336 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1549481524/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.03s)
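The cleanup being verified here hinges on a single call; a standalone sketch using this run's profile:

# One --kill call tears down every mount daemon for the profile, which is
# why the per-mount stop logic above finds no surviving parent process.
out/minikube-linux-arm64 mount -p functional-647336 --kill=true
out/minikube-linux-arm64 -p functional-647336 ssh "findmnt -T /mount1" || echo "/mount1 detached"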

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-647336
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-647336
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-647336
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (212.07s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1027 19:19:45.075825  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-881070 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m31.197262995s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (212.07s)
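Stripped of harness flags, the bring-up above reduces to the following (profile name reused from this run):

# --ha provisions additional control-plane nodes;
# --wait true blocks until core components report healthy.
out/minikube-linux-arm64 start -p ha-881070 --ha --memory 3072 --wait true \
  --driver=docker --container-runtime=crio
out/minikube-linux-arm64 -p ha-881070 status   # one block per node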

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.36s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 kubectl -- rollout status deployment/busybox
E1027 19:21:08.161297  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-881070 kubectl -- rollout status deployment/busybox: (4.627089041s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 kubectl -- exec busybox-7b57f96db7-5slqp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 kubectl -- exec busybox-7b57f96db7-62q89 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 kubectl -- exec busybox-7b57f96db7-xzq9m -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 kubectl -- exec busybox-7b57f96db7-5slqp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 kubectl -- exec busybox-7b57f96db7-62q89 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 kubectl -- exec busybox-7b57f96db7-xzq9m -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 kubectl -- exec busybox-7b57f96db7-5slqp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 kubectl -- exec busybox-7b57f96db7-62q89 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 kubectl -- exec busybox-7b57f96db7-xzq9m -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.36s)
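The per-pod checks above all follow one pattern; condensed into a loop (context name from this run, deployment from the applied testdata manifest):

# Verify every busybox replica can resolve in-cluster DNS.
kubectl --context ha-881070 rollout status deployment/busybox
for pod in $(kubectl --context ha-881070 get pods -o jsonpath='{.items[*].metadata.name}'); do
  kubectl --context ha-881070 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
done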

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.51s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 kubectl -- exec busybox-7b57f96db7-5slqp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 kubectl -- exec busybox-7b57f96db7-5slqp -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 kubectl -- exec busybox-7b57f96db7-62q89 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 kubectl -- exec busybox-7b57f96db7-62q89 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 kubectl -- exec busybox-7b57f96db7-xzq9m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 kubectl -- exec busybox-7b57f96db7-xzq9m -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.51s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (58.54s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 node add --alsologtostderr -v 5
E1027 19:21:46.044597  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/functional-647336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:21:46.051211  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/functional-647336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:21:46.062538  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/functional-647336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:21:46.083929  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/functional-647336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:21:46.125395  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/functional-647336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:21:46.206828  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/functional-647336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:21:46.368305  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/functional-647336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:21:46.689915  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/functional-647336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:21:47.331538  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/functional-647336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:21:48.612959  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/functional-647336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:21:51.175098  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/functional-647336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:21:56.297002  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/functional-647336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:22:06.538454  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/functional-647336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-881070 node add --alsologtostderr -v 5: (57.522531652s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-881070 status --alsologtostderr -v 5: (1.019084624s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (58.54s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-881070 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.04s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.043966153s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.04s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.76s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-881070 status --output json --alsologtostderr -v 5: (1.06110831s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 cp testdata/cp-test.txt ha-881070:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 cp ha-881070:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2234191356/001/cp-test_ha-881070.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 cp ha-881070:/home/docker/cp-test.txt ha-881070-m02:/home/docker/cp-test_ha-881070_ha-881070-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070-m02 "sudo cat /home/docker/cp-test_ha-881070_ha-881070-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 cp ha-881070:/home/docker/cp-test.txt ha-881070-m03:/home/docker/cp-test_ha-881070_ha-881070-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070-m03 "sudo cat /home/docker/cp-test_ha-881070_ha-881070-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 cp ha-881070:/home/docker/cp-test.txt ha-881070-m04:/home/docker/cp-test_ha-881070_ha-881070-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070-m04 "sudo cat /home/docker/cp-test_ha-881070_ha-881070-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 cp testdata/cp-test.txt ha-881070-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 cp ha-881070-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2234191356/001/cp-test_ha-881070-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 cp ha-881070-m02:/home/docker/cp-test.txt ha-881070:/home/docker/cp-test_ha-881070-m02_ha-881070.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070 "sudo cat /home/docker/cp-test_ha-881070-m02_ha-881070.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 cp ha-881070-m02:/home/docker/cp-test.txt ha-881070-m03:/home/docker/cp-test_ha-881070-m02_ha-881070-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070-m03 "sudo cat /home/docker/cp-test_ha-881070-m02_ha-881070-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 cp ha-881070-m02:/home/docker/cp-test.txt ha-881070-m04:/home/docker/cp-test_ha-881070-m02_ha-881070-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070-m04 "sudo cat /home/docker/cp-test_ha-881070-m02_ha-881070-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 cp testdata/cp-test.txt ha-881070-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 cp ha-881070-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2234191356/001/cp-test_ha-881070-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 cp ha-881070-m03:/home/docker/cp-test.txt ha-881070:/home/docker/cp-test_ha-881070-m03_ha-881070.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070 "sudo cat /home/docker/cp-test_ha-881070-m03_ha-881070.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 cp ha-881070-m03:/home/docker/cp-test.txt ha-881070-m02:/home/docker/cp-test_ha-881070-m03_ha-881070-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070-m03 "sudo cat /home/docker/cp-test.txt"
E1027 19:22:27.020439  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/functional-647336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070-m02 "sudo cat /home/docker/cp-test_ha-881070-m03_ha-881070-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 cp ha-881070-m03:/home/docker/cp-test.txt ha-881070-m04:/home/docker/cp-test_ha-881070-m03_ha-881070-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070-m04 "sudo cat /home/docker/cp-test_ha-881070-m03_ha-881070-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 cp testdata/cp-test.txt ha-881070-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 cp ha-881070-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2234191356/001/cp-test_ha-881070-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 cp ha-881070-m04:/home/docker/cp-test.txt ha-881070:/home/docker/cp-test_ha-881070-m04_ha-881070.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070 "sudo cat /home/docker/cp-test_ha-881070-m04_ha-881070.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 cp ha-881070-m04:/home/docker/cp-test.txt ha-881070-m02:/home/docker/cp-test_ha-881070-m04_ha-881070-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070-m02 "sudo cat /home/docker/cp-test_ha-881070-m04_ha-881070-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 cp ha-881070-m04:/home/docker/cp-test.txt ha-881070-m03:/home/docker/cp-test_ha-881070-m04_ha-881070-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070-m03 "sudo cat /home/docker/cp-test_ha-881070-m04_ha-881070-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.76s)
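The matrix above is every (source, destination) pairing of host and nodes; one representative round-trip, with a hypothetical destination filename, looks like this:

# host -> node, then node -> node, each verified by reading the file back.
out/minikube-linux-arm64 -p ha-881070 cp testdata/cp-test.txt ha-881070:/home/docker/cp-test.txt
out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070 "sudo cat /home/docker/cp-test.txt"
out/minikube-linux-arm64 -p ha-881070 cp ha-881070:/home/docker/cp-test.txt \
  ha-881070-m02:/home/docker/cp-test-copy.txt   # destination name is illustrative
out/minikube-linux-arm64 -p ha-881070 ssh -n ha-881070-m02 "sudo cat /home/docker/cp-test-copy.txt"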

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (13.05s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-881070 node stop m02 --alsologtostderr -v 5: (12.239354799s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-881070 status --alsologtostderr -v 5: exit status 7 (812.740212ms)

                                                
                                                
-- stdout --
	ha-881070
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-881070-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-881070-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-881070-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 19:22:45.713332  311389 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:22:45.721417  311389 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:22:45.721439  311389 out.go:374] Setting ErrFile to fd 2...
	I1027 19:22:45.721446  311389 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:22:45.721732  311389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 19:22:45.721965  311389 out.go:368] Setting JSON to false
	I1027 19:22:45.721997  311389 mustload.go:65] Loading cluster: ha-881070
	I1027 19:22:45.722510  311389 config.go:182] Loaded profile config "ha-881070": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:22:45.722531  311389 status.go:174] checking status of ha-881070 ...
	I1027 19:22:45.723093  311389 cli_runner.go:164] Run: docker container inspect ha-881070 --format={{.State.Status}}
	I1027 19:22:45.725924  311389 notify.go:220] Checking for updates...
	I1027 19:22:45.758556  311389 status.go:371] ha-881070 host status = "Running" (err=<nil>)
	I1027 19:22:45.758596  311389 host.go:66] Checking if "ha-881070" exists ...
	I1027 19:22:45.759137  311389 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-881070
	I1027 19:22:45.799189  311389 host.go:66] Checking if "ha-881070" exists ...
	I1027 19:22:45.799562  311389 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:22:45.799634  311389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-881070
	I1027 19:22:45.819434  311389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/ha-881070/id_rsa Username:docker}
	I1027 19:22:45.924246  311389 ssh_runner.go:195] Run: systemctl --version
	I1027 19:22:45.931287  311389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:22:45.948205  311389 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:22:46.008778  311389 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-27 19:22:45.997414166 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 19:22:46.009530  311389 kubeconfig.go:125] found "ha-881070" server: "https://192.168.49.254:8443"
	I1027 19:22:46.009584  311389 api_server.go:166] Checking apiserver status ...
	I1027 19:22:46.009637  311389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:22:46.022199  311389 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1267/cgroup
	I1027 19:22:46.031521  311389 api_server.go:182] apiserver freezer: "5:freezer:/docker/147de85143673a939ccbcb80835d95ae6df12dc018d0d55d2260dc9d7eaaf3c1/crio/crio-48e83304d4c9a41e4734d99c8bf485eccaa73400baaa4bc1aeb314212c354c3d"
	I1027 19:22:46.031638  311389 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/147de85143673a939ccbcb80835d95ae6df12dc018d0d55d2260dc9d7eaaf3c1/crio/crio-48e83304d4c9a41e4734d99c8bf485eccaa73400baaa4bc1aeb314212c354c3d/freezer.state
	I1027 19:22:46.039577  311389 api_server.go:204] freezer state: "THAWED"
	I1027 19:22:46.039618  311389 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1027 19:22:46.048132  311389 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1027 19:22:46.048169  311389 status.go:463] ha-881070 apiserver status = Running (err=<nil>)
	I1027 19:22:46.048217  311389 status.go:176] ha-881070 status: &{Name:ha-881070 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 19:22:46.048240  311389 status.go:174] checking status of ha-881070-m02 ...
	I1027 19:22:46.048689  311389 cli_runner.go:164] Run: docker container inspect ha-881070-m02 --format={{.State.Status}}
	I1027 19:22:46.067870  311389 status.go:371] ha-881070-m02 host status = "Stopped" (err=<nil>)
	I1027 19:22:46.067894  311389 status.go:384] host is not running, skipping remaining checks
	I1027 19:22:46.067901  311389 status.go:176] ha-881070-m02 status: &{Name:ha-881070-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 19:22:46.067921  311389 status.go:174] checking status of ha-881070-m03 ...
	I1027 19:22:46.068246  311389 cli_runner.go:164] Run: docker container inspect ha-881070-m03 --format={{.State.Status}}
	I1027 19:22:46.089320  311389 status.go:371] ha-881070-m03 host status = "Running" (err=<nil>)
	I1027 19:22:46.089352  311389 host.go:66] Checking if "ha-881070-m03" exists ...
	I1027 19:22:46.089682  311389 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-881070-m03
	I1027 19:22:46.108444  311389 host.go:66] Checking if "ha-881070-m03" exists ...
	I1027 19:22:46.108766  311389 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:22:46.108812  311389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-881070-m03
	I1027 19:22:46.127720  311389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/ha-881070-m03/id_rsa Username:docker}
	I1027 19:22:46.233104  311389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:22:46.254352  311389 kubeconfig.go:125] found "ha-881070" server: "https://192.168.49.254:8443"
	I1027 19:22:46.254382  311389 api_server.go:166] Checking apiserver status ...
	I1027 19:22:46.254461  311389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:22:46.267803  311389 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1175/cgroup
	I1027 19:22:46.278354  311389 api_server.go:182] apiserver freezer: "5:freezer:/docker/f2918d266a9ea054cac6369a0c5318c828818ea1d40e4c27fee925930a32cf58/crio/crio-9123a625c50c851b7e77759de0f217bbbe76a3a8580ae2f365df7f261e93e479"
	I1027 19:22:46.278427  311389 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f2918d266a9ea054cac6369a0c5318c828818ea1d40e4c27fee925930a32cf58/crio/crio-9123a625c50c851b7e77759de0f217bbbe76a3a8580ae2f365df7f261e93e479/freezer.state
	I1027 19:22:46.286280  311389 api_server.go:204] freezer state: "THAWED"
	I1027 19:22:46.286319  311389 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1027 19:22:46.295452  311389 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1027 19:22:46.295479  311389 status.go:463] ha-881070-m03 apiserver status = Running (err=<nil>)
	I1027 19:22:46.295517  311389 status.go:176] ha-881070-m03 status: &{Name:ha-881070-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 19:22:46.295543  311389 status.go:174] checking status of ha-881070-m04 ...
	I1027 19:22:46.295864  311389 cli_runner.go:164] Run: docker container inspect ha-881070-m04 --format={{.State.Status}}
	I1027 19:22:46.312865  311389 status.go:371] ha-881070-m04 host status = "Running" (err=<nil>)
	I1027 19:22:46.312910  311389 host.go:66] Checking if "ha-881070-m04" exists ...
	I1027 19:22:46.313218  311389 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-881070-m04
	I1027 19:22:46.334676  311389 host.go:66] Checking if "ha-881070-m04" exists ...
	I1027 19:22:46.335030  311389 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:22:46.335078  311389 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-881070-m04
	I1027 19:22:46.355221  311389 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/ha-881070-m04/id_rsa Username:docker}
	I1027 19:22:46.460412  311389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:22:46.474152  311389 status.go:176] ha-881070-m04 status: &{Name:ha-881070-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.05s)
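Note the exit status 7 above is expected: status signals a degraded cluster through its exit code rather than failing outright. A sketch of handling it in a script:

# status exits non-zero (7 in this run) once m02 is stopped, so capture
# the code instead of aborting under `set -e`.
out/minikube-linux-arm64 -p ha-881070 node stop m02
rc=0
out/minikube-linux-arm64 -p ha-881070 status || rc=$?
echo "status exit code: $rc"   # 7 => at least one host reported Stopped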

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (27.39s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 node start m02 --alsologtostderr -v 5
E1027 19:23:07.982455  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/functional-647336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-881070 node start m02 --alsologtostderr -v 5: (25.948013965s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-881070 status --alsologtostderr -v 5: (1.318361216s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (27.39s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.22s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.221926779s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.22s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (127.4s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-881070 stop --alsologtostderr -v 5: (27.30286425s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 start --wait true --alsologtostderr -v 5
E1027 19:24:29.903877  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/functional-647336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:24:45.062963  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-881070 start --wait true --alsologtostderr -v 5: (1m39.901007676s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (127.40s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.68s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-881070 node delete m03 --alsologtostderr -v 5: (10.669230138s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.68s)
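The go-template probe at the end is reusable on its own; unwrapped from the harness quoting it reads:

# Print the Ready condition of every remaining node; each line should be "True".
kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'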

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.80s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.02s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-881070 stop --alsologtostderr -v 5: (35.906338495s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-881070 status --alsologtostderr -v 5: exit status 7 (110.278234ms)

                                                
                                                
-- stdout --
	ha-881070
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-881070-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-881070-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 19:26:11.742620  323301 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:26:11.742740  323301 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:26:11.742752  323301 out.go:374] Setting ErrFile to fd 2...
	I1027 19:26:11.742756  323301 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:26:11.743059  323301 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 19:26:11.743257  323301 out.go:368] Setting JSON to false
	I1027 19:26:11.743301  323301 mustload.go:65] Loading cluster: ha-881070
	I1027 19:26:11.743373  323301 notify.go:220] Checking for updates...
	I1027 19:26:11.744285  323301 config.go:182] Loaded profile config "ha-881070": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:26:11.744311  323301 status.go:174] checking status of ha-881070 ...
	I1027 19:26:11.744883  323301 cli_runner.go:164] Run: docker container inspect ha-881070 --format={{.State.Status}}
	I1027 19:26:11.762315  323301 status.go:371] ha-881070 host status = "Stopped" (err=<nil>)
	I1027 19:26:11.762339  323301 status.go:384] host is not running, skipping remaining checks
	I1027 19:26:11.762346  323301 status.go:176] ha-881070 status: &{Name:ha-881070 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 19:26:11.762370  323301 status.go:174] checking status of ha-881070-m02 ...
	I1027 19:26:11.762677  323301 cli_runner.go:164] Run: docker container inspect ha-881070-m02 --format={{.State.Status}}
	I1027 19:26:11.788984  323301 status.go:371] ha-881070-m02 host status = "Stopped" (err=<nil>)
	I1027 19:26:11.789010  323301 status.go:384] host is not running, skipping remaining checks
	I1027 19:26:11.789025  323301 status.go:176] ha-881070-m02 status: &{Name:ha-881070-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 19:26:11.789055  323301 status.go:174] checking status of ha-881070-m04 ...
	I1027 19:26:11.789342  323301 cli_runner.go:164] Run: docker container inspect ha-881070-m04 --format={{.State.Status}}
	I1027 19:26:11.806095  323301 status.go:371] ha-881070-m04 host status = "Stopped" (err=<nil>)
	I1027 19:26:11.806118  323301 status.go:384] host is not running, skipping remaining checks
	I1027 19:26:11.806124  323301 status.go:176] ha-881070-m04 status: &{Name:ha-881070-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.02s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (74.91s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1027 19:26:46.044477  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/functional-647336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:27:13.748099  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/functional-647336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-881070 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m13.891989844s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (74.91s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (92.37s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-881070 node add --control-plane --alsologtostderr -v 5: (1m31.280618233s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-881070 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-881070 status --alsologtostderr -v 5: (1.085787081s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (92.37s)
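Re-adding a control plane is the same node add seen earlier plus one flag:

# --control-plane is the only difference from the worker join above;
# afterwards status should list the new node as "type: Control Plane".
out/minikube-linux-arm64 -p ha-881070 node add --control-plane
out/minikube-linux-arm64 -p ha-881070 status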

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.4s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.403780183s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.40s)

                                                
                                    
TestJSONOutput/start/Command (79.61s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-368309 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1027 19:29:45.073501  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-368309 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m19.60499384s)
--- PASS: TestJSONOutput/start/Command (79.61s)
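The Audit and parallel step subtests that follow all assert over this JSON stream. A rough equivalent with jq, assuming the CloudEvents-style line format and the data.currentstep field that current minikube releases emit:

# Save the event stream, then check the properties the subtests below
# verify: step numbers are distinct and never decrease. Field names are
# assumptions about minikube's JSON event schema.
out/minikube-linux-arm64 start -p json-output-368309 --output=json --user=testUser \
  --memory=3072 --wait=true --driver=docker --container-runtime=crio | tee start-events.json
jq -r 'select(.data.currentstep != null) | .data.currentstep' start-events.json > steps.txt
sort -n -c steps.txt && echo "steps never decrease"
[ -z "$(uniq -d steps.txt)" ] && echo "no duplicate steps"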

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.8s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-368309 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-368309 --output=json --user=testUser: (5.803747955s)
--- PASS: TestJSONOutput/stop/Command (5.80s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-233493 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-233493 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (96.671898ms)

-- stdout --
	{"specversion":"1.0","id":"b8953a74-5a01-4a04-95d2-6bad69174a5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-233493] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5f492db3-b948-4b88-b7fa-40adede5f58b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21801"}}
	{"specversion":"1.0","id":"29500f7e-2a66-46bf-b521-fd1366653dee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c451fa37-70ad-4228-9d7e-affb53dc8886","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig"}}
	{"specversion":"1.0","id":"b19063ff-d64f-432d-b743-f9ed1f7bbdea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube"}}
	{"specversion":"1.0","id":"20390a4c-b100-4295-bbbc-95c4ea66ed45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e0a43908-0ca5-4bab-8501-a210f75281f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1748b44a-5c9a-4488-903c-5948a828c73e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-233493" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-233493
--- PASS: TestErrorJSONOutput (0.24s)
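
Note: as the stdout above shows, --output=json turns every start step into one CloudEvents-style JSON object per line (type io.k8s.sigs.minikube.step, .info, or .error, with the payload under "data"; error events carry "exitcode" and "name"). A minimal sketch for consuming that stream, assuming jq is installed and using an illustrative profile name:

	minikube start -p jsondemo --output=json 2>&1 | \
	  jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'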

TestKicCustomNetwork/create_custom_network (40.98s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-305765 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-305765 --network=: (38.769110956s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-305765" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-305765
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-305765: (2.191279743s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.98s)
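
Note: the passing run above doubles as the recipe for putting a KIC cluster on its own Docker network. A sketch with illustrative names, assuming --network creates the named bridge network when it does not already exist (the existing-network test below pre-creates it instead):

	minikube start -p netdemo --network=my-net
	docker network ls --format '{{.Name}}'    # my-net should now be listed
	minikube delete -p netdemo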

TestKicCustomNetwork/use_default_bridge_network (35.52s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-194791 --network=bridge
E1027 19:31:46.044521  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/functional-647336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-194791 --network=bridge: (33.156550505s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-194791" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-194791
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-194791: (2.326131689s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.52s)

TestKicExistingNetwork (39.6s)

=== RUN   TestKicExistingNetwork
I1027 19:32:00.527914  267880 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1027 19:32:00.552055  267880 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1027 19:32:00.552156  267880 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1027 19:32:00.552188  267880 cli_runner.go:164] Run: docker network inspect existing-network
W1027 19:32:00.572373  267880 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1027 19:32:00.572406  267880 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1027 19:32:00.572422  267880 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1027 19:32:00.572534  267880 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1027 19:32:00.593342  267880 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-74ee89127400 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:aa:c7:29:bd:7a:a9} reservation:<nil>}
I1027 19:32:00.593854  267880 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40015b7690}
I1027 19:32:00.593884  267880 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1027 19:32:00.593943  267880 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1027 19:32:00.658191  267880 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-220477 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-220477 --network=existing-network: (37.216550803s)
helpers_test.go:175: Cleaning up "existing-network-220477" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-220477
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-220477: (2.211481085s)
I1027 19:32:40.108478  267880 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (39.60s)
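
Note: the log above is also the manual flow for attaching minikube to a pre-existing Docker network, using the same docker flags the test issues (subnet and names are illustrative):

	docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-net
	minikube start -p netdemo --network=existing-net
	minikube delete -p netdemo
	docker network rm existing-net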

TestKicCustomSubnet (40.68s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-492989 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-492989 --subnet=192.168.60.0/24: (38.459776506s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-492989 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-492989" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-492989
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-492989: (2.2007538s)
--- PASS: TestKicCustomSubnet (40.68s)
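
Note: to pin the KIC network to a chosen subnet as this test does, and to confirm what Docker actually allocated (the network is named after the profile; names illustrative):

	minikube start -p subnetdemo --subnet=192.168.60.0/24
	docker network inspect subnetdemo --format '{{(index .IPAM.Config 0).Subnet}}'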

TestKicStaticIP (35.54s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-974338 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-974338 --static-ip=192.168.200.200: (33.200401373s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-974338 ip
helpers_test.go:175: Cleaning up "static-ip-974338" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-974338
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-974338: (2.177529693s)
--- PASS: TestKicStaticIP (35.54s)
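
Note: the equivalent manual flow for a fixed node IP; the test asserts that minikube ip afterwards reports the requested address (profile name illustrative):

	minikube start -p ipdemo --static-ip=192.168.200.200
	minikube -p ipdemo ip    # expected: 192.168.200.200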

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (74.29s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-235889 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-235889 --driver=docker  --container-runtime=crio: (33.836392839s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-238318 --driver=docker  --container-runtime=crio
E1027 19:34:45.069328  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-238318 --driver=docker  --container-runtime=crio: (34.847091019s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-235889
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-238318
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-238318" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-238318
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-238318: (2.062790234s)
helpers_test.go:175: Cleaning up "first-235889" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-235889
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-235889: (2.099429791s)
--- PASS: TestMinikubeProfile (74.29s)
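
Note: the profile commands exercised above are the standard way to juggle several clusters side by side; condensed, with illustrative names:

	minikube start -p first --driver=docker --container-runtime=crio
	minikube start -p second --driver=docker --container-runtime=crio
	minikube profile first           # make "first" the active profile
	minikube profile list -ojson     # machine-readable listing of all profiles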

TestMountStart/serial/StartWithMountFirst (10.66s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-591561 --memory=3072 --mount-string /tmp/TestMountStartserial3669532973/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-591561 --memory=3072 --mount-string /tmp/TestMountStartserial3669532973/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (9.662279361s)
--- PASS: TestMountStart/serial/StartWithMountFirst (10.66s)
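
Note: the start invocation above is the complete 9p host-mount recipe, mirrored here with an illustrative host path (--no-kubernetes keeps the node Kubernetes-free; the mount is verified in the next test with ssh -- ls):

	minikube start -p mountdemo --memory=3072 --no-kubernetes \
	  --mount-string /tmp/hostdir:/minikube-host \
	  --mount-port 46464 --mount-uid 0 --mount-gid 0 --mount-msize 6543 \
	  --driver=docker --container-runtime=crio
	minikube -p mountdemo ssh -- ls /minikube-host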

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-591561 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (9.39s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-593350 --memory=3072 --mount-string /tmp/TestMountStartserial3669532973/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-593350 --memory=3072 --mount-string /tmp/TestMountStartserial3669532973/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.39319363s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.39s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-593350 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.72s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-591561 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-591561 --alsologtostderr -v=5: (1.715616911s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-593350 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-593350
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-593350: (1.292097009s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (7.76s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-593350
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-593350: (6.760013984s)
--- PASS: TestMountStart/serial/RestartStopped (7.76s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-593350 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (134.5s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-198354 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1027 19:36:46.044754  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/functional-647336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:37:48.162671  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-198354 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m13.978027298s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (134.50s)

TestMultiNode/serial/DeployApp2Nodes (5.04s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-198354 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-198354 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-198354 -- rollout status deployment/busybox: (3.273229491s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-198354 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-198354 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-198354 -- exec busybox-7b57f96db7-2xbjm -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-198354 -- exec busybox-7b57f96db7-4w79t -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-198354 -- exec busybox-7b57f96db7-2xbjm -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-198354 -- exec busybox-7b57f96db7-4w79t -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-198354 -- exec busybox-7b57f96db7-2xbjm -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-198354 -- exec busybox-7b57f96db7-4w79t -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.04s)
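
Note: the busybox rollout above is the usual cross-node DNS smoke test; condensed (manifest and pod name are illustrative):

	minikube kubectl -p nodedemo -- apply -f multinode-pod-dns-test.yaml
	minikube kubectl -p nodedemo -- rollout status deployment/busybox
	minikube kubectl -p nodedemo -- get pods -o jsonpath='{.items[*].metadata.name}'
	minikube kubectl -p nodedemo -- exec <pod-name> -- nslookup kubernetes.default.svc.cluster.local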

TestMultiNode/serial/PingHostFrom2Pods (0.9s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-198354 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-198354 -- exec busybox-7b57f96db7-2xbjm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-198354 -- exec busybox-7b57f96db7-2xbjm -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-198354 -- exec busybox-7b57f96db7-4w79t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-198354 -- exec busybox-7b57f96db7-4w79t -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.90s)

TestMultiNode/serial/AddNode (58.58s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-198354 -v=5 --alsologtostderr
E1027 19:38:09.109469  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/functional-647336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-198354 -v=5 --alsologtostderr: (57.848173398s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.58s)
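
Note: growing a running cluster is a single command, after which status reports the new worker next to the control plane (profile name illustrative):

	minikube node add -p nodedemo
	minikube -p nodedemo status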

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-198354 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.71s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

TestMultiNode/serial/CopyFile (10.39s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 cp testdata/cp-test.txt multinode-198354:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 ssh -n multinode-198354 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 cp multinode-198354:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3656579192/001/cp-test_multinode-198354.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 ssh -n multinode-198354 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 cp multinode-198354:/home/docker/cp-test.txt multinode-198354-m02:/home/docker/cp-test_multinode-198354_multinode-198354-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 ssh -n multinode-198354 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 ssh -n multinode-198354-m02 "sudo cat /home/docker/cp-test_multinode-198354_multinode-198354-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 cp multinode-198354:/home/docker/cp-test.txt multinode-198354-m03:/home/docker/cp-test_multinode-198354_multinode-198354-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 ssh -n multinode-198354 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 ssh -n multinode-198354-m03 "sudo cat /home/docker/cp-test_multinode-198354_multinode-198354-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 cp testdata/cp-test.txt multinode-198354-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 ssh -n multinode-198354-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 cp multinode-198354-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3656579192/001/cp-test_multinode-198354-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 ssh -n multinode-198354-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 cp multinode-198354-m02:/home/docker/cp-test.txt multinode-198354:/home/docker/cp-test_multinode-198354-m02_multinode-198354.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 ssh -n multinode-198354-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 ssh -n multinode-198354 "sudo cat /home/docker/cp-test_multinode-198354-m02_multinode-198354.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 cp multinode-198354-m02:/home/docker/cp-test.txt multinode-198354-m03:/home/docker/cp-test_multinode-198354-m02_multinode-198354-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 ssh -n multinode-198354-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 ssh -n multinode-198354-m03 "sudo cat /home/docker/cp-test_multinode-198354-m02_multinode-198354-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 cp testdata/cp-test.txt multinode-198354-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 ssh -n multinode-198354-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 cp multinode-198354-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3656579192/001/cp-test_multinode-198354-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 ssh -n multinode-198354-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 cp multinode-198354-m03:/home/docker/cp-test.txt multinode-198354:/home/docker/cp-test_multinode-198354-m03_multinode-198354.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 ssh -n multinode-198354-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 ssh -n multinode-198354 "sudo cat /home/docker/cp-test_multinode-198354-m03_multinode-198354.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 cp multinode-198354-m03:/home/docker/cp-test.txt multinode-198354-m02:/home/docker/cp-test_multinode-198354-m03_multinode-198354-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 ssh -n multinode-198354-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 ssh -n multinode-198354-m02 "sudo cat /home/docker/cp-test_multinode-198354-m03_multinode-198354-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.39s)
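
Note: the copy matrix above reduces to three forms of minikube cp, host-to-node, node-to-host, and node-to-node, each verified over ssh (paths and node names illustrative):

	minikube -p nodedemo cp cp-test.txt nodedemo:/home/docker/cp-test.txt
	minikube -p nodedemo cp nodedemo:/home/docker/cp-test.txt /tmp/cp-test.txt
	minikube -p nodedemo cp nodedemo:/home/docker/cp-test.txt nodedemo-m02:/home/docker/cp-test.txt
	minikube -p nodedemo ssh -n nodedemo-m02 "sudo cat /home/docker/cp-test.txt"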

TestMultiNode/serial/StopNode (2.42s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-198354 node stop m03: (1.321790171s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-198354 status: exit status 7 (559.670534ms)

-- stdout --
	multinode-198354
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-198354-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-198354-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-198354 status --alsologtostderr: exit status 7 (537.250957ms)

-- stdout --
	multinode-198354
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-198354-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-198354-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1027 19:39:16.639591  373639 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:39:16.648948  373639 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:39:16.649011  373639 out.go:374] Setting ErrFile to fd 2...
	I1027 19:39:16.649034  373639 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:39:16.649339  373639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 19:39:16.649598  373639 out.go:368] Setting JSON to false
	I1027 19:39:16.649656  373639 mustload.go:65] Loading cluster: multinode-198354
	I1027 19:39:16.650211  373639 config.go:182] Loaded profile config "multinode-198354": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:39:16.650256  373639 status.go:174] checking status of multinode-198354 ...
	I1027 19:39:16.650840  373639 cli_runner.go:164] Run: docker container inspect multinode-198354 --format={{.State.Status}}
	I1027 19:39:16.651167  373639 notify.go:220] Checking for updates...
	I1027 19:39:16.669151  373639 status.go:371] multinode-198354 host status = "Running" (err=<nil>)
	I1027 19:39:16.669172  373639 host.go:66] Checking if "multinode-198354" exists ...
	I1027 19:39:16.669466  373639 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-198354
	I1027 19:39:16.692283  373639 host.go:66] Checking if "multinode-198354" exists ...
	I1027 19:39:16.692588  373639 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:39:16.692643  373639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-198354
	I1027 19:39:16.709947  373639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33263 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/multinode-198354/id_rsa Username:docker}
	I1027 19:39:16.812642  373639 ssh_runner.go:195] Run: systemctl --version
	I1027 19:39:16.819580  373639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:39:16.832543  373639 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:39:16.894221  373639 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-27 19:39:16.88477249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 19:39:16.894803  373639 kubeconfig.go:125] found "multinode-198354" server: "https://192.168.67.2:8443"
	I1027 19:39:16.894843  373639 api_server.go:166] Checking apiserver status ...
	I1027 19:39:16.894891  373639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:39:16.906830  373639 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1218/cgroup
	I1027 19:39:16.915676  373639 api_server.go:182] apiserver freezer: "5:freezer:/docker/477ccda0aab9e8f61b12927c363443bdd293052e1f7922ab7177ed6ea0d38de1/crio/crio-4392f695de342b28cdc41dfeb184383fa0fdcbbc134d4e4829c17617f7953e67"
	I1027 19:39:16.915746  373639 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/477ccda0aab9e8f61b12927c363443bdd293052e1f7922ab7177ed6ea0d38de1/crio/crio-4392f695de342b28cdc41dfeb184383fa0fdcbbc134d4e4829c17617f7953e67/freezer.state
	I1027 19:39:16.923674  373639 api_server.go:204] freezer state: "THAWED"
	I1027 19:39:16.923705  373639 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1027 19:39:16.931789  373639 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1027 19:39:16.931816  373639 status.go:463] multinode-198354 apiserver status = Running (err=<nil>)
	I1027 19:39:16.931828  373639 status.go:176] multinode-198354 status: &{Name:multinode-198354 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 19:39:16.931845  373639 status.go:174] checking status of multinode-198354-m02 ...
	I1027 19:39:16.932149  373639 cli_runner.go:164] Run: docker container inspect multinode-198354-m02 --format={{.State.Status}}
	I1027 19:39:16.949409  373639 status.go:371] multinode-198354-m02 host status = "Running" (err=<nil>)
	I1027 19:39:16.949434  373639 host.go:66] Checking if "multinode-198354-m02" exists ...
	I1027 19:39:16.949747  373639 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-198354-m02
	I1027 19:39:16.966816  373639 host.go:66] Checking if "multinode-198354-m02" exists ...
	I1027 19:39:16.967222  373639 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:39:16.967279  373639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-198354-m02
	I1027 19:39:16.984567  373639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33268 SSHKeyPath:/home/jenkins/minikube-integration/21801-266035/.minikube/machines/multinode-198354-m02/id_rsa Username:docker}
	I1027 19:39:17.088194  373639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:39:17.100619  373639 status.go:176] multinode-198354-m02 status: &{Name:multinode-198354-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1027 19:39:17.100655  373639 status.go:174] checking status of multinode-198354-m03 ...
	I1027 19:39:17.100947  373639 cli_runner.go:164] Run: docker container inspect multinode-198354-m03 --format={{.State.Status}}
	I1027 19:39:17.118421  373639 status.go:371] multinode-198354-m03 host status = "Stopped" (err=<nil>)
	I1027 19:39:17.118445  373639 status.go:384] host is not running, skipping remaining checks
	I1027 19:39:17.118453  373639 status.go:176] multinode-198354-m03 status: &{Name:multinode-198354-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.42s)
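
Note: as the two non-zero exits above show, minikube status returns exit code 7 while any node is stopped, which scripts can key off; a sketch (profile name illustrative):

	minikube -p nodedemo node stop m03
	minikube -p nodedemo status || echo "cluster degraded: status exited $?"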

TestMultiNode/serial/StartAfterStop (8.54s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-198354 node start m03 -v=5 --alsologtostderr: (7.760244379s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.54s)

TestMultiNode/serial/RestartKeepsNodes (78.92s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-198354
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-198354
E1027 19:39:45.063661  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-198354: (25.014490898s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-198354 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-198354 --wait=true -v=5 --alsologtostderr: (53.775937996s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-198354
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.92s)

TestMultiNode/serial/DeleteNode (5.85s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-198354 node delete m03: (5.13848889s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.85s)

TestMultiNode/serial/StopMultiNode (23.99s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-198354 stop: (23.787619804s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-198354 status: exit status 7 (100.694238ms)

-- stdout --
	multinode-198354
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-198354-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-198354 status --alsologtostderr: exit status 7 (103.470844ms)

-- stdout --
	multinode-198354
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-198354-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1027 19:41:14.378846  381411 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:41:14.379048  381411 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:41:14.379080  381411 out.go:374] Setting ErrFile to fd 2...
	I1027 19:41:14.379102  381411 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:41:14.379388  381411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 19:41:14.379616  381411 out.go:368] Setting JSON to false
	I1027 19:41:14.379686  381411 mustload.go:65] Loading cluster: multinode-198354
	I1027 19:41:14.379761  381411 notify.go:220] Checking for updates...
	I1027 19:41:14.380718  381411 config.go:182] Loaded profile config "multinode-198354": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:41:14.380763  381411 status.go:174] checking status of multinode-198354 ...
	I1027 19:41:14.381303  381411 cli_runner.go:164] Run: docker container inspect multinode-198354 --format={{.State.Status}}
	I1027 19:41:14.399966  381411 status.go:371] multinode-198354 host status = "Stopped" (err=<nil>)
	I1027 19:41:14.399987  381411 status.go:384] host is not running, skipping remaining checks
	I1027 19:41:14.399994  381411 status.go:176] multinode-198354 status: &{Name:multinode-198354 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 19:41:14.400043  381411 status.go:174] checking status of multinode-198354-m02 ...
	I1027 19:41:14.400344  381411 cli_runner.go:164] Run: docker container inspect multinode-198354-m02 --format={{.State.Status}}
	I1027 19:41:14.430018  381411 status.go:371] multinode-198354-m02 host status = "Stopped" (err=<nil>)
	I1027 19:41:14.430045  381411 status.go:384] host is not running, skipping remaining checks
	I1027 19:41:14.430051  381411 status.go:176] multinode-198354-m02 status: &{Name:multinode-198354-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.99s)

TestMultiNode/serial/RestartMultiNode (50.86s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-198354 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1027 19:41:46.044651  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/functional-647336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-198354 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (50.159455822s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-198354 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.86s)

TestMultiNode/serial/ValidateNameConflict (35.91s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-198354
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-198354-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-198354-m02 --driver=docker  --container-runtime=crio: exit status 14 (94.060229ms)

-- stdout --
	* [multinode-198354-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21801
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-198354-m02' is duplicated with machine name 'multinode-198354-m02' in profile 'multinode-198354'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-198354-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-198354-m03 --driver=docker  --container-runtime=crio: (33.352390879s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-198354
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-198354: exit status 80 (348.068681ms)

-- stdout --
	* Adding node m03 to cluster multinode-198354 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-198354-m03 already exists in multinode-198354-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-198354-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-198354-m03: (2.061886472s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.91s)
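The conflict above stems from minikube's machine-naming scheme: additional nodes in a profile get machine names <profile>-m02, <profile>-m03, and so on, so a new profile may not reuse one of those names. A minimal reproduction sketch, assuming the placeholder profile name "demo" is free:

minikube start -p demo --driver=docker --container-runtime=crio
minikube node add -p demo                                            # creates machine demo-m02
minikube start -p demo-m02 --driver=docker --container-runtime=crio  # rejected with MK_USAGE (exit 14)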

TestPreload (158.92s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-882161 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-882161 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m2.847807651s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-882161 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-882161 image pull gcr.io/k8s-minikube/busybox: (2.126869628s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-882161
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-882161: (5.949364621s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-882161 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1027 19:44:45.063220  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-882161 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m25.361138891s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-882161 image list
helpers_test.go:175: Cleaning up "test-preload-882161" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-882161
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-882161: (2.397385498s)
--- PASS: TestPreload (158.92s)
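The flow above checks that an image pulled into a cluster started with --preload=false survives a stop/start cycle. A condensed sketch, assuming the placeholder profile name "preload-demo" is free:

minikube start -p preload-demo --preload=false --driver=docker --container-runtime=crio --kubernetes-version=v1.32.0
minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
minikube stop -p preload-demo
minikube start -p preload-demo --driver=docker --container-runtime=crio
minikube -p preload-demo image list    # busybox should still be listed
minikube delete -p preload-demo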

TestScheduledStopUnix (109.64s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-819261 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-819261 --memory=3072 --driver=docker  --container-runtime=crio: (33.882874795s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-819261 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-819261 -n scheduled-stop-819261
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-819261 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1027 19:45:58.938620  267880 retry.go:31] will retry after 147.561µs: open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/scheduled-stop-819261/pid: no such file or directory
I1027 19:45:58.939802  267880 retry.go:31] will retry after 166.292µs: open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/scheduled-stop-819261/pid: no such file or directory
I1027 19:45:58.940909  267880 retry.go:31] will retry after 226.44µs: open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/scheduled-stop-819261/pid: no such file or directory
I1027 19:45:58.941991  267880 retry.go:31] will retry after 392.305µs: open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/scheduled-stop-819261/pid: no such file or directory
I1027 19:45:58.943106  267880 retry.go:31] will retry after 429.008µs: open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/scheduled-stop-819261/pid: no such file or directory
I1027 19:45:58.944191  267880 retry.go:31] will retry after 552.607µs: open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/scheduled-stop-819261/pid: no such file or directory
I1027 19:45:58.945282  267880 retry.go:31] will retry after 592.264µs: open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/scheduled-stop-819261/pid: no such file or directory
I1027 19:45:58.946396  267880 retry.go:31] will retry after 2.366977ms: open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/scheduled-stop-819261/pid: no such file or directory
I1027 19:45:58.949580  267880 retry.go:31] will retry after 3.218276ms: open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/scheduled-stop-819261/pid: no such file or directory
I1027 19:45:58.955226  267880 retry.go:31] will retry after 4.543145ms: open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/scheduled-stop-819261/pid: no such file or directory
I1027 19:45:58.960442  267880 retry.go:31] will retry after 6.420121ms: open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/scheduled-stop-819261/pid: no such file or directory
I1027 19:45:58.967686  267880 retry.go:31] will retry after 11.410494ms: open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/scheduled-stop-819261/pid: no such file or directory
I1027 19:45:58.980004  267880 retry.go:31] will retry after 9.879553ms: open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/scheduled-stop-819261/pid: no such file or directory
I1027 19:45:58.990275  267880 retry.go:31] will retry after 28.244437ms: open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/scheduled-stop-819261/pid: no such file or directory
I1027 19:45:59.019563  267880 retry.go:31] will retry after 43.221601ms: open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/scheduled-stop-819261/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-819261 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-819261 -n scheduled-stop-819261
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-819261
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-819261 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1027 19:46:46.044742  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/functional-647336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-819261
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-819261: exit status 7 (70.142961ms)

-- stdout --
	scheduled-stop-819261
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-819261 -n scheduled-stop-819261
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-819261 -n scheduled-stop-819261: exit status 7 (77.574653ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-819261" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-819261
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-819261: (4.169812347s)
--- PASS: TestScheduledStopUnix (109.64s)
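The schedule/cancel sequence above maps onto three stop flags, all visible in the commands run by the test. A minimal sketch, assuming an existing profile named "demo":

minikube stop -p demo --schedule 5m        # arm a stop five minutes out
minikube stop -p demo --cancel-scheduled   # disarm it; the host keeps running
minikube stop -p demo --schedule 15s       # re-arm; shortly afterwards `minikube status` reports Stopped (exit 7)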

TestInsufficientStorage (13.79s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-264452 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-264452 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (11.213524886s)

-- stdout --
	{"specversion":"1.0","id":"5b2ccc42-187d-42b5-98f8-dd24ff0fea71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-264452] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"64057c5a-9463-4138-9e74-8d5b88e10bf4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21801"}}
	{"specversion":"1.0","id":"3de94807-53ba-41bf-825c-26fef88f9ba7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"9c3367a9-7bf5-4660-bcf8-16303e21bacd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig"}}
	{"specversion":"1.0","id":"1e732b41-0256-4b23-b388-2b42dc6c2cac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube"}}
	{"specversion":"1.0","id":"e6cdc6f5-60a8-4a4e-8953-06232f072439","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"6f7f85b9-31fb-499a-9d68-daef968d1ad9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d06d1984-4c49-4e3d-912c-9727b7829c20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"5798f5cf-84b2-4361-b200-fd7c0a701688","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"9e24ab07-65f8-4e74-a2ad-2cf830fcffdb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2b4e45e2-7877-45e6-83dd-9440e603b6be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"3baf5211-715e-4fff-91d4-842f0408a423","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-264452\" primary control-plane node in \"insufficient-storage-264452\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"573b4ab6-5d46-4757-b5ac-e09dfb489a93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e2369d64-9639-482b-9a6c-d8e1acc84406","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"99f178a7-e066-4fdd-b924-1e768a4db0bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-264452 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-264452 --output=json --layout=cluster: exit status 7 (305.735963ms)

-- stdout --
	{"Name":"insufficient-storage-264452","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-264452","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1027 19:47:25.696204  397527 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-264452" does not appear in /home/jenkins/minikube-integration/21801-266035/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-264452 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-264452 --output=json --layout=cluster: exit status 7 (310.172827ms)

-- stdout --
	{"Name":"insufficient-storage-264452","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-264452","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1027 19:47:26.007148  397594 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-264452" does not appear in /home/jenkins/minikube-integration/21801-266035/kubeconfig
	E1027 19:47:26.017645  397594 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/insufficient-storage-264452/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-264452" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-264452
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-264452: (1.958451813s)
--- PASS: TestInsufficientStorage (13.79s)
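With --output=json, each setup step and the final RSRC_DOCKER_STORAGE error arrive as one CloudEvents JSON object per line (the MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE values in the log appear to be test-only knobs that simulate a full disk). A sketch of filtering that stream, assuming jq is installed and using the placeholder profile name "demo":

minikube start -p demo --output=json --driver=docker --container-runtime=crio \
  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'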

TestRunningBinaryUpgrade (56.27s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2840842589 start -p running-upgrade-048851 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2840842589 start -p running-upgrade-048851 --memory=3072 --vm-driver=docker  --container-runtime=crio: (32.491696681s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-048851 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-048851 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.663154359s)
helpers_test.go:175: Cleaning up "running-upgrade-048851" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-048851
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-048851: (2.043146124s)
--- PASS: TestRunningBinaryUpgrade (56.27s)
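The upgrade path above is just two start calls against the same profile: the old release creates the cluster, then the newer binary adopts and upgrades it in place. A sketch, assuming an old binary at the placeholder path /tmp/minikube-v1.32.0 and a free profile name "upgrade-demo"; note the older release spells the driver flag --vm-driver:

/tmp/minikube-v1.32.0 start -p upgrade-demo --memory=3072 --vm-driver=docker --container-runtime=crio
minikube start -p upgrade-demo --memory=3072 --driver=docker --container-runtime=crio
minikube delete -p upgrade-demo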

TestKubernetesUpgrade (450.44s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-524430 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-524430 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.482184617s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-524430
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-524430: (1.450615923s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-524430 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-524430 status --format={{.Host}}: exit status 7 (104.576359ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-524430 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1027 19:49:45.062428  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-524430 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m38.799075802s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-524430 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-524430 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-524430 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (155.954134ms)

-- stdout --
	* [kubernetes-upgrade-524430] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21801
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-524430
	    minikube start -p kubernetes-upgrade-524430 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5244302 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-524430 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-524430 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-524430 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (2m5.693722731s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-524430" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-524430
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-524430: (2.550795192s)
--- PASS: TestKubernetesUpgrade (450.44s)
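In short: re-running start with a newer --kubernetes-version upgrades the stopped cluster in place, while a lower version is refused with exit code 106 and the recreate/second-cluster suggestions shown in the stderr block above. A condensed sketch, assuming the placeholder profile name "k8s-demo" is free:

minikube start -p k8s-demo --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
minikube stop -p k8s-demo
minikube start -p k8s-demo --kubernetes-version=v1.34.1 --driver=docker --container-runtime=crio   # in-place upgrade
minikube start -p k8s-demo --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio   # refused: K8S_DOWNGRADE_UNSUPPORTED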

TestMissingContainerUpgrade (121.36s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3217858879 start -p missing-upgrade-033557 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3217858879 start -p missing-upgrade-033557 --memory=3072 --driver=docker  --container-runtime=crio: (1m5.043296657s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-033557
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-033557
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-033557 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-033557 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (49.938071422s)
helpers_test.go:175: Cleaning up "missing-upgrade-033557" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-033557
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-033557: (2.117477502s)
--- PASS: TestMissingContainerUpgrade (121.36s)
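This scenario removes the cluster's Docker container out from under minikube, then checks that a newer binary can rebuild it from the profile left on disk. A sketch, assuming an old binary at the placeholder path /tmp/minikube-v1.32.0:

/tmp/minikube-v1.32.0 start -p missing-demo --memory=3072 --driver=docker --container-runtime=crio
docker stop missing-demo && docker rm missing-demo    # simulate the container going missing
minikube start -p missing-demo --memory=3072 --driver=docker --container-runtime=crio   # recreated from the saved profile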

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-358331 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-358331 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (101.437249ms)

-- stdout --
	* [NoKubernetes-358331] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21801
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
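As the stderr above spells out, --no-kubernetes and --kubernetes-version are mutually exclusive; if the version is pinned in the global config rather than passed on the command line, clear it first. A sketch, using the placeholder profile name "nok8s-demo":

minikube config unset kubernetes-version
minikube start -p nok8s-demo --no-kubernetes --driver=docker --container-runtime=crio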

TestNoKubernetes/serial/StartWithK8s (43.26s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-358331 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-358331 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (42.797317911s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-358331 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.26s)

TestNoKubernetes/serial/StartWithStopK8s (17.38s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-358331 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-358331 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (14.485558658s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-358331 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-358331 status -o json: exit status 2 (477.44762ms)

-- stdout --
	{"Name":"NoKubernetes-358331","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-358331
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-358331: (2.419599363s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.38s)
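When Kubernetes is stopped but the host container is up, status -o json reports the split state shown above and the command exits 2, so scripts should tolerate the non-zero exit. A sketch, assuming jq is installed and reusing the placeholder profile name from the previous note:

minikube -p nok8s-demo status -o json | jq -r '.Host, .Kubelet'   # prints Running / Stopped; status itself exits 2 here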

TestNoKubernetes/serial/Start (9.95s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-358331 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-358331 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.94550645s)
--- PASS: TestNoKubernetes/serial/Start (9.95s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-358331 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-358331 "sudo systemctl is-active --quiet service kubelet": exit status 1 (277.85906ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
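The check above leans on systemd exit codes: `systemctl is-active --quiet` exits non-zero for an inactive unit, and `minikube ssh` propagates the remote failure as its own non-zero exit. A sketch, reusing the placeholder profile name:

minikube ssh -p nok8s-demo "sudo systemctl is-active --quiet service kubelet" || echo "kubelet is not running"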

TestNoKubernetes/serial/ProfileList (0.68s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.68s)

TestNoKubernetes/serial/Stop (1.28s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-358331
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-358331: (1.279990245s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

TestNoKubernetes/serial/StartNoArgs (6.71s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-358331 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-358331 --driver=docker  --container-runtime=crio: (6.714395971s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.71s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-358331 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-358331 "sudo systemctl is-active --quiet service kubelet": exit status 1 (281.716564ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestStoppedBinaryUpgrade/Setup (2.39s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.39s)

TestStoppedBinaryUpgrade/Upgrade (56.53s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1909242510 start -p stopped-upgrade-296733 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1909242510 start -p stopped-upgrade-296733 --memory=3072 --vm-driver=docker  --container-runtime=crio: (35.727794858s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1909242510 -p stopped-upgrade-296733 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1909242510 -p stopped-upgrade-296733 stop: (1.228352564s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-296733 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-296733 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (19.568660877s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (56.53s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.14s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-296733
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-296733: (1.144674281s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.14s)

TestPause/serial/Start (84.98s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-470021 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1027 19:51:46.044500  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/functional-647336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-470021 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m24.975639608s)
--- PASS: TestPause/serial/Start (84.98s)

TestPause/serial/SecondStartNoReconfiguration (28.84s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-470021 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-470021 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.822835459s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (28.84s)

TestNetworkPlugins/group/false (3.59s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-750423 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-750423 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (194.75362ms)

-- stdout --
	* [false-750423] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21801
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1027 19:54:18.153029  435519 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:54:18.153196  435519 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:54:18.153228  435519 out.go:374] Setting ErrFile to fd 2...
	I1027 19:54:18.153250  435519 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:54:18.153541  435519 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-266035/.minikube/bin
	I1027 19:54:18.154053  435519 out.go:368] Setting JSON to false
	I1027 19:54:18.155042  435519 start.go:131] hostinfo: {"hostname":"ip-172-31-21-244","uptime":9411,"bootTime":1761585448,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1027 19:54:18.155143  435519 start.go:141] virtualization:  
	I1027 19:54:18.158540  435519 out.go:179] * [false-750423] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1027 19:54:18.161720  435519 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:54:18.161792  435519 notify.go:220] Checking for updates...
	I1027 19:54:18.167788  435519 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:54:18.170685  435519 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-266035/kubeconfig
	I1027 19:54:18.173638  435519 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-266035/.minikube
	I1027 19:54:18.177438  435519 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1027 19:54:18.180461  435519 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:54:18.183904  435519 config.go:182] Loaded profile config "kubernetes-upgrade-524430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:54:18.184024  435519 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:54:18.219701  435519 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1027 19:54:18.219836  435519 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:54:18.277164  435519 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-27 19:54:18.265252685 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1027 19:54:18.277278  435519 docker.go:318] overlay module found
	I1027 19:54:18.280528  435519 out.go:179] * Using the docker driver based on user configuration
	I1027 19:54:18.283403  435519 start.go:305] selected driver: docker
	I1027 19:54:18.283421  435519 start.go:925] validating driver "docker" against <nil>
	I1027 19:54:18.283435  435519 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:54:18.287116  435519 out.go:203] 
	W1027 19:54:18.289996  435519 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1027 19:54:18.292753  435519 out.go:203] 

** /stderr **
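The MK_USAGE failure above is the expected guard: the crio runtime requires a CNI, so --cni=false is rejected before any cluster is created. A sketch of the accepted shape, using the placeholder profile name "cni-demo" and kindnet as one example CNI choice:

minikube start -p cni-demo --cni=false --container-runtime=crio    # rejected, exit 14
minikube start -p cni-demo --cni=kindnet --container-runtime=crio  # any concrete CNI satisfies the check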
net_test.go:88: 
----------------------- debugLogs start: false-750423 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-750423

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-750423

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-750423

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-750423

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-750423

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-750423

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-750423

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-750423

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-750423

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-750423

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-750423

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-750423" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-750423" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-750423" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-750423" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-750423" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-750423" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-750423" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-750423" does not exist

>>> host: /etc/cni:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

>>> host: ip a s:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

>>> host: ip r s:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

>>> host: iptables-save:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

>>> host: iptables table nat:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

>>> k8s: describe kube-proxy daemon set:
error: context "false-750423" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-750423" does not exist

>>> k8s: kube-proxy logs:
error: context "false-750423" does not exist

>>> host: kubelet daemon status:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

>>> host: kubelet daemon config:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

>>> k8s: kubelet logs:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 19:54:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-524430
contexts:
- context:
    cluster: kubernetes-upgrade-524430
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 19:54:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-524430
  name: kubernetes-upgrade-524430
current-context: kubernetes-upgrade-524430
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-524430
  user:
    client-certificate: /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/kubernetes-upgrade-524430/client.crt
    client-key: /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/kubernetes-upgrade-524430/client.key
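This kubeconfig explains every kubectl failure in the dump: the only context present is kubernetes-upgrade-524430, while debugLogs keeps asking for the already-deleted false-750423 profile. A minimal sketch, assuming the default kubeconfig location, for checking what exists before pointing kubectl at a context:

# "false-750423" is absent from this list, hence: context "false-750423" does not exist.
kubectl config get-contexts
# Switch to a context the kubeconfig actually contains.
kubectl config use-context kubernetes-upgrade-524430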

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-750423

>>> host: docker daemon status:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

>>> host: docker daemon config:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

>>> host: /etc/docker/daemon.json:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

>>> host: docker system info:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

>>> host: cri-docker daemon status:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

>>> host: cri-docker daemon config:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

>>> host: cri-dockerd version:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

>>> host: containerd daemon status:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

>>> host: containerd daemon config:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

>>> host: /etc/containerd/config.toml:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

>>> host: containerd config dump:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

>>> host: crio daemon status:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

>>> host: crio daemon config:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

>>> host: /etc/crio:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

>>> host: crio config:
* Profile "false-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-750423"

----------------------- debugLogs end: false-750423 [took: 3.242932214s] --------------------------------
helpers_test.go:175: Cleaning up "false-750423" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-750423
--- PASS: TestNetworkPlugins/group/false (3.59s)

TestStartStop/group/old-k8s-version/serial/FirstStart (62.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-942644 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-942644 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m2.356393506s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (62.36s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-942644 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [54b4e94f-69ec-4136-8574-9416e44e9e48] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [54b4e94f-69ec-4136-8574-9416e44e9e48] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003776286s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-942644 exec busybox -- /bin/sh -c "ulimit -n"
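testdata/busybox.yaml itself is not reproduced in this log; the following is a hypothetical sketch of a manifest with the same observable shape: a pod named busybox carrying the integration-test=busybox label the test waits on, using the busybox image that VerifyKubernetesImages lists later. The "ulimit -n" exec afterwards reads the container's open-file-descriptor limit.

kubectl --context old-k8s-version-942644 apply -f - <<'EOF'
# Hypothetical stand-in for testdata/busybox.yaml, not the actual test file.
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    integration-test: busybox
spec:
  containers:
  - name: busybox
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
    command: ["sleep", "3600"]
EOF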
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.41s)

TestStartStop/group/old-k8s-version/serial/Stop (11.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-942644 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-942644 --alsologtostderr -v=3: (11.994310214s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.99s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-942644 -n old-k8s-version-942644
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-942644 -n old-k8s-version-942644: exit status 7 (65.625286ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-942644 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
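Exit status 7 from "minikube status" is tolerated here: the test notes it "may be ok" because the host was just stopped. A minimal shell sketch of the same tolerate-then-enable flow, with the profile name reused from this run; treating any non-zero status as "stopped" is an assumption based on the log's note:

# Capture the status exit code instead of aborting on it.
out/minikube-linux-arm64 status --format='{{.Host}}' -p old-k8s-version-942644
rc=$?
if [ "$rc" -ne 0 ]; then
  echo "status exited ${rc} (host stopped, may be ok)"
fi
# Addons can still be toggled while the profile is stopped.
out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-942644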
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/old-k8s-version/serial/SecondStart (55.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-942644 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-942644 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (55.209973096s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-942644 -n old-k8s-version-942644
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (55.63s)

TestStartStop/group/no-preload/serial/FirstStart (77.03s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-300878 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-300878 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m17.025220682s)
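With --preload=false, minikube skips the preloaded image tarball for this Kubernetes version and pulls every image individually, which is consistent with this FirstStart running longer than the preloaded runs above. A hedged way to inspect what ended up in the node's CRI-O image store (crictl ships inside the minikube node image):

# List the images CRI-O pulled one by one instead of via the preload tarball.
out/minikube-linux-arm64 -p no-preload-300878 ssh -- sudo crictl images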
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (77.03s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-5rbpv" [0bd4a580-5e95-40f0-bbcc-10838ef4c773] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003482934s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-5rbpv" [0bd4a580-5e95-40f0-bbcc-10838ef4c773] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003590839s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-942644 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.18s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-942644 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
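"image list --format=json" emits a JSON array describing each image, and the test flags anything outside the expected Kubernetes registries. A sketch of the same filter with jq; the repoTags field name is an assumption about the JSON shape, not verified from this log:

# Surface images the test would report as "non-minikube".
out/minikube-linux-arm64 -p old-k8s-version-942644 image list --format=json \
  | jq -r '.[].repoTags[]' \
  | grep -v '^registry.k8s.io/'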
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/embed-certs/serial/FirstStart (85.11s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-629838 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1027 19:59:45.062450  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-629838 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m25.107416099s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (85.11s)

TestStartStop/group/no-preload/serial/DeployApp (10.39s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-300878 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [6e0e7212-11a8-40cb-8e65-ee62a4a0c520] Pending
helpers_test.go:352: "busybox" [6e0e7212-11a8-40cb-8e65-ee62a4a0c520] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [6e0e7212-11a8-40cb-8e65-ee62a4a0c520] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003446112s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-300878 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.39s)

TestStartStop/group/no-preload/serial/Stop (12.03s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-300878 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-300878 --alsologtostderr -v=3: (12.029128797s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.03s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-300878 -n no-preload-300878
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-300878 -n no-preload-300878: exit status 7 (79.733373ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-300878 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (53.54s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-300878 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-300878 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (53.048353679s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-300878 -n no-preload-300878
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (53.54s)

TestStartStop/group/embed-certs/serial/DeployApp (8.33s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-629838 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [00b9d871-3c8b-42a7-9c24-e1ac939805c4] Pending
helpers_test.go:352: "busybox" [00b9d871-3c8b-42a7-9c24-e1ac939805c4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [00b9d871-3c8b-42a7-9c24-e1ac939805c4] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003919613s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-629838 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.33s)

TestStartStop/group/embed-certs/serial/Stop (12.03s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-629838 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-629838 --alsologtostderr -v=3: (12.028294606s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.03s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-629838 -n embed-certs-629838
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-629838 -n embed-certs-629838: exit status 7 (68.254815ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-629838 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (54.56s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-629838 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-629838 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (54.022412408s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-629838 -n embed-certs-629838
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (54.56s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hqxgb" [c3f77740-952e-48ea-b5fe-d07800ef585f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003457221s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hqxgb" [c3f77740-952e-48ea-b5fe-d07800ef585f] Running
E1027 20:01:46.045020  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/functional-647336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003285882s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-300878 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-300878 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (78.61s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-073048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-073048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m18.607291271s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (78.61s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zplzg" [89247be4-8f07-4d93-8a87-0335df591788] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003139792s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zplzg" [89247be4-8f07-4d93-8a87-0335df591788] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00418384s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-629838 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-629838 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/FirstStart (41.72s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-702588 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1027 20:02:58.680384  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:02:58.686649  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:02:58.697968  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:02:58.719324  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:02:58.760684  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:02:58.842060  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:02:59.003675  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:02:59.325680  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:02:59.967813  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:03:01.249364  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:03:03.811526  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:03:08.933140  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-702588 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (41.723505405s)
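--extra-config follows minikube's documented component.key=value form, so kubeadm.pod-network-cidr=10.42.0.0/16 is handed to kubeadm at init time; the pod CIDR then has to match whatever CNI gets installed. Condensed from the run above:

# The "kubeadm." prefix routes the setting to kubeadm rather than, say, kubelet.
out/minikube-linux-arm64 start -p newest-cni-702588 \
  --network-plugin=cni \
  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
  --driver=docker --container-runtime=crio --kubernetes-version=v1.34.1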
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.72s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-073048 create -f testdata/busybox.yaml
E1027 20:03:19.175239  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [53db98e8-ffba-4a6b-b0b4-8145690263ae] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [53db98e8-ffba-4a6b-b0b4-8145690263ae] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.00369143s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-073048 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.40s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (1.49s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-702588 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-702588 --alsologtostderr -v=3: (1.491841182s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.49s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-073048 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-073048 --alsologtostderr -v=3: (12.304509608s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.30s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-702588 -n newest-cni-702588
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-702588 -n newest-cni-702588: exit status 7 (106.555619ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-702588 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/newest-cni/serial/SecondStart (16.37s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-702588 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1027 20:03:39.657157  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-702588 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (15.919287537s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-702588 -n newest-cni-702588
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.37s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-073048 -n default-k8s-diff-port-073048
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-073048 -n default-k8s-diff-port-073048: exit status 7 (143.225177ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-073048 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.35s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (62.72s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-073048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-073048 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m2.242185881s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-073048 -n default-k8s-diff-port-073048
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (62.72s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
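This warning is why the two post-stop pod checks are skipped: with --network-plugin=cni and no CNI actually installed, pods cannot schedule. A hedged sketch of the extra setup the warning alludes to, using flannel's published release manifest; note flannel's default net-conf assumes 10.244.0.0/16 and would need editing to match the 10.42.0.0/16 CIDR used in this run:

# Install a CNI so pods can schedule on the newest-cni cluster.
kubectl --context newest-cni-702588 apply -f \
  https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml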
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-702588 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

TestNetworkPlugins/group/custom-flannel/Start (61.88s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-750423 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1027 20:04:20.618512  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:04:45.062746  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-750423 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m1.878555576s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (61.88s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-lrj9p" [a3673ae3-9469-4d0e-9186-0b159e83baa7] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003396106s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-lrj9p" [a3673ae3-9469-4d0e-9186-0b159e83baa7] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004316096s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-073048 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-073048 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-750423 "pgrep -a kubelet"
I1027 20:05:02.982877  267880 config.go:182] Loaded profile config "custom-flannel-750423": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.45s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-750423 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-g54vq" [ce9967ed-5209-4399-a407-6f9201492bdf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-g54vq" [ce9967ed-5209-4399-a407-6f9201492bdf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003506s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.45s)

TestNetworkPlugins/group/auto/Start (89.83s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-750423 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-750423 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m29.833889121s)
--- PASS: TestNetworkPlugins/group/auto/Start (89.83s)

TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-750423 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-750423 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.21s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-750423 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
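DNS, Localhost, and HairPin are three probes run from the same netcat deployment: resolve a cluster DNS name, connect to a port inside the pod itself, then connect back to the pod through its own "netcat" Service, the hairpin path that only works when hairpin NAT is handled correctly. The three logged commands, combined into one reproduction sketch:

# All three probes must succeed for the DNS/Localhost/HairPin trio to pass.
kubectl --context custom-flannel-750423 exec deployment/netcat -- /bin/sh -c \
  'nslookup kubernetes.default && nc -w 5 -i 5 -z localhost 8080 && nc -w 5 -i 5 -z netcat 8080'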
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/kindnet/Start (83.28s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-750423 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1027 20:05:42.540106  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:05:59.286678  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-750423 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m23.281781791s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (83.28s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-750423 "pgrep -a kubelet"
I1027 20:06:39.310128  267880 config.go:182] Loaded profile config "auto-750423": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)
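The KubeletFlags subtests simply shell into the node and capture the running kubelet's full command line via pgrep -a. To inspect one flag by hand, the same output can be post-processed; the grep target here is only an example and assumes the flag is present in this configuration:

  out/minikube-linux-arm64 ssh -p auto-750423 "pgrep -a kubelet" | tr ' ' '\n' | grep -- --container-runtime-endpoint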

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-750423 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kt4tm" [59161a2d-acd1-427a-bdf1-de06f2ea795a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1027 20:06:40.248040  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-kt4tm" [59161a2d-acd1-427a-bdf1-de06f2ea795a] Running
E1027 20:06:46.044620  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/functional-647336/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003341358s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-750423 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-750423 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-750423 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-z56p6" [663d72f4-3043-4e48-8f6d-61d3dfd769d8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00404409s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
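The ControllerPod subtests wait for the CNI's node agent to report Ready. A rough hand equivalent of this 10m wait on the app=kindnet label, using plain kubectl rather than the test helper:

  kubectl --context kindnet-750423 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m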

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-750423 "pgrep -a kubelet"
I1027 20:07:11.397448  267880 config.go:182] Loaded profile config "kindnet-750423": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.46s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.37s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-750423 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-m7jvb" [2600ddcd-d271-4716-9618-c8d44811d807] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-m7jvb" [2600ddcd-d271-4716-9618-c8d44811d807] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004414932s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.37s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (63.44s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-750423 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-750423 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m3.444605591s)
--- PASS: TestNetworkPlugins/group/flannel/Start (63.44s)
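All Start tests in this group share one invocation and differ only in how the CNI is chosen: --cni=kindnet|flannel|calico|bridge, --enable-default-cni=true, or a custom manifest path for custom-flannel (its start command is not shown in this excerpt). The shared shape, with <profile> and <cni> as placeholders:

  out/minikube-linux-arm64 start -p <profile>-750423 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=<cni> --driver=docker --container-runtime=crio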

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-750423 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-750423 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-750423 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (74.45s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-750423 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1027 20:07:58.680178  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:08:02.170236  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-750423 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m14.454839457s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (74.45s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-qcgg6" [f2ad2a0a-1995-46ef-9fe8-2bc84a746060] Running
E1027 20:08:19.248378  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:08:19.254675  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:08:19.265985  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:08:19.287332  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:08:19.328691  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:08:19.410020  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:08:19.571463  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:08:19.893288  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:08:20.535070  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003998956s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.48s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-750423 "pgrep -a kubelet"
I1027 20:08:21.332471  267880 config.go:182] Loaded profile config "flannel-750423": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.48s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.38s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-750423 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pn8h7" [c81fca25-9932-497b-b9c7-d9b06cf930e1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1027 20:08:21.816371  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:08:24.378798  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:08:26.381408  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/old-k8s-version-942644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-pn8h7" [c81fca25-9932-497b-b9c7-d9b06cf930e1] Running
E1027 20:08:29.500809  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.00352424s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.38s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-750423 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-750423 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-750423 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (87.34s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-750423 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1027 20:09:00.231099  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/default-k8s-diff-port-073048/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-750423 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m27.342100809s)
--- PASS: TestNetworkPlugins/group/bridge/Start (87.34s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-750423 "pgrep -a kubelet"
I1027 20:09:04.569793  267880 config.go:182] Loaded profile config "enable-default-cni-750423": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-750423 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8wmsw" [d26eb160-b0de-4645-a63a-4abf1f68f1f2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8wmsw" [d26eb160-b0de-4645-a63a-4abf1f68f1f2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004372924s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.35s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-750423 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-750423 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-750423 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (64.12s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-750423 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
E1027 20:09:45.064214  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/addons-101592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:10:03.374966  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:10:03.381364  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:10:03.392742  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:10:03.414127  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:10:03.455417  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:10:03.536705  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:10:03.698223  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:10:04.019979  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:10:04.661623  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:10:05.943224  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:10:08.505009  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:10:13.627068  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 20:10:18.306618  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-750423 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m4.116315821s)
--- PASS: TestNetworkPlugins/group/calico/Start (64.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-750423 "pgrep -a kubelet"
I1027 20:10:23.027472  267880 config.go:182] Loaded profile config "bridge-750423": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.35s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-750423 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lv2h4" [b4640a1d-3989-410e-a00c-31095bb96b25] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1027 20:10:23.868657  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/custom-flannel-750423/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-lv2h4" [b4640a1d-3989-410e-a00c-31095bb96b25] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.005516602s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.35s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-750423 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-750423 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-750423 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-s96vs" [6e08dce5-0995-45de-ba4a-534ffb817d26] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
E1027 20:10:46.012257  267880 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/no-preload-300878/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "calico-node-s96vs" [6e08dce5-0995-45de-ba4a-534ffb817d26] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003486786s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-750423 "pgrep -a kubelet"
I1027 20:10:51.975287  267880 config.go:182] Loaded profile config "calico-750423": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.37s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-750423 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-p59m8" [271b4451-58c3-4a38-b3e3-90696c103b5a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-p59m8" [271b4451-58c3-4a38-b3e3-90696c103b5a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003965662s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.37s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-750423 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-750423 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-750423 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    

Test skip (31/327)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.42s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-980377 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-980377" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-980377
--- SKIP: TestDownloadOnlyKic (0.42s)

                                                
                                    
TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:34: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.45s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-230052" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-230052
--- SKIP: TestStartStop/group/disable-driver-mounts (0.45s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.47s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
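Every probe in the debugLogs dump below fails with a missing-context or missing-profile message simply because this skipped test never created the kubenet-750423 cluster; the dump is expected noise, not a failure. Which contexts and profiles do exist at this point can be checked with:

  kubectl config get-contexts
  out/minikube-linux-arm64 profile list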
panic.go:636: 
----------------------- debugLogs start: kubenet-750423 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-750423

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-750423

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-750423

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-750423

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-750423

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-750423

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-750423

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-750423

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-750423

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-750423

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-750423

>>> host: crictl pods:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

>>> host: crictl containers:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

>>> k8s: describe netcat deployment:
error: context "kubenet-750423" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-750423" does not exist

>>> k8s: netcat logs:
error: context "kubenet-750423" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-750423" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-750423" does not exist

>>> k8s: coredns logs:
error: context "kubenet-750423" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-750423" does not exist

>>> k8s: api server logs:
error: context "kubenet-750423" does not exist

>>> host: /etc/cni:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

>>> host: ip a s:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

>>> host: ip r s:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

>>> host: iptables-save:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

>>> host: iptables table nat:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-750423" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-750423" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-750423" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

>>> host: kubelet daemon config:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

>>> k8s: kubelet logs:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 19:54:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-524430
contexts:
- context:
    cluster: kubernetes-upgrade-524430
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 19:54:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-524430
  name: kubernetes-upgrade-524430
current-context: kubernetes-upgrade-524430
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-524430
  user:
    client-certificate: /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/kubernetes-upgrade-524430/client.crt
    client-key: /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/kubernetes-upgrade-524430/client.key
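
The kubeconfig above is the crux of every kubectl failure in this dump: the kubenet test was skipped before a cluster was ever created, so the only context on file is kubernetes-upgrade-524430 and nothing named kubenet-750423 exists. A minimal reproduction against this kubeconfig, assuming the probes are plain kubectl invocations pinned to the missing context (the commands below are illustrative, not part of the captured log):

$ kubectl config get-contexts -o name   # only one context is defined
kubernetes-upgrade-524430
$ kubectl --context kubenet-750423 get nodes
error: context "kubenet-750423" does not exist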

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-750423

>>> host: docker daemon status:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

>>> host: docker daemon config:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

>>> host: docker system info:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

>>> host: cri-docker daemon status:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

>>> host: cri-docker daemon config:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

>>> host: cri-dockerd version:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

>>> host: containerd daemon status:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

>>> host: containerd daemon config:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

>>> host: containerd config dump:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

>>> host: crio daemon status:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

>>> host: crio daemon config:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

>>> host: /etc/crio:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

>>> host: crio config:
* Profile "kubenet-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-750423"

----------------------- debugLogs end: kubenet-750423 [took: 3.306593779s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-750423" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-750423
--- SKIP: TestNetworkPlugins/group/kubenet (3.47s)

x
+
TestNetworkPlugins/group/cilium (3.84s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-750423 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-750423

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-750423

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-750423

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-750423

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-750423

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-750423

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-750423

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-750423

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-750423

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-750423

>>> host: /etc/nsswitch.conf:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

>>> host: /etc/hosts:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

>>> host: /etc/resolv.conf:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-750423

>>> host: crictl pods:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

>>> host: crictl containers:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

>>> k8s: describe netcat deployment:
error: context "cilium-750423" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-750423" does not exist

>>> k8s: netcat logs:
error: context "cilium-750423" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-750423" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-750423" does not exist

>>> k8s: coredns logs:
error: context "cilium-750423" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-750423" does not exist

>>> k8s: api server logs:
error: context "cilium-750423" does not exist

>>> host: /etc/cni:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

>>> host: ip a s:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

>>> host: ip r s:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

>>> host: iptables-save:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

>>> host: iptables table nat:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-750423

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-750423

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-750423" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-750423" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-750423

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-750423

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-750423" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-750423" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-750423" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-750423" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-750423" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

>>> host: kubelet daemon config:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

>>> k8s: kubelet logs:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21801-266035/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 19:54:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-524430
contexts:
- context:
    cluster: kubernetes-upgrade-524430
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 19:54:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-524430
  name: kubernetes-upgrade-524430
current-context: kubernetes-upgrade-524430
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-524430
  user:
    client-certificate: /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/kubernetes-upgrade-524430/client.crt
    client-key: /home/jenkins/minikube-integration/21801-266035/.minikube/profiles/kubernetes-upgrade-524430/client.key
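
The host-side probes ("host: ...") fail one layer earlier than the kubectl ones: they go through the minikube binary, which refuses to touch a profile that was never created. A minimal sketch of that failure mode, assuming the collector runs host commands via minikube ssh (illustrative, not from the captured log):

$ out/minikube-linux-arm64 -p cilium-750423 ssh -- sudo crictl ps
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"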

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-750423

>>> host: docker daemon status:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

>>> host: docker daemon config:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

>>> host: docker system info:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

>>> host: cri-docker daemon status:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

>>> host: cri-docker daemon config:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

>>> host: cri-dockerd version:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

>>> host: containerd daemon status:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

>>> host: containerd daemon config:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

>>> host: containerd config dump:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

>>> host: crio daemon status:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

>>> host: crio daemon config:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

>>> host: /etc/crio:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

>>> host: crio config:
* Profile "cilium-750423" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-750423"

----------------------- debugLogs end: cilium-750423 [took: 3.694571952s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-750423" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-750423
--- SKIP: TestNetworkPlugins/group/cilium (3.84s)
